Qualitative vs Quantitative Research Methods & Data Analysis

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative data is information about quantities, and therefore numbers; qualitative data is descriptive and concerns phenomena that can be observed but not easily measured, such as language.
  • Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed numerically. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.
  • Qualitative research gathers non-numerical data (words, images, sounds) to explore subjective experiences and attitudes, often via observation and interviews. It aims to produce detailed descriptions and uncover new insights about the studied phenomenon.


What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Denzin and Lincoln (1994, p. 2)

Interest in qualitative data came about as the result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the scientific study of behavior carried out by psychologists such as the behaviorists (e.g., Skinner).

Since psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research, since it fails to capture the totality of human experience and the essence of being human. Exploring participants’ experiences is known as a phenomenological approach (see Humanism).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Examples of qualitative research questions include: What does an experience feel like? How do people talk about something? How do they make sense of an experience? How do events unfold for people?

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews, documents, focus groups, case study research, and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience. Denzin and Lincoln (1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts : Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations : The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews : Generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals : Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis .

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.
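
To make the coding step concrete, here is a minimal Python sketch of how coded qualitative data might be tallied. The interview excerpts, codebook, and theme groupings are hypothetical, and the interpretive work of reading and coding the data is assumed to have been done by the researcher; the script only counts the codes that result.

```python
from collections import Counter

# Hypothetical coded excerpts: each excerpt has been read by a researcher
# and tagged with one or more codes from a hand-built codebook.
coded_excerpts = [
    {"text": "I felt nobody at work listened to me.", "codes": ["isolation", "work stress"]},
    {"text": "Talking to my sister helped me cope.", "codes": ["social support"]},
    {"text": "Deadlines kept piling up and I couldn't sleep.", "codes": ["work stress", "sleep problems"]},
]

# Count how often each code appears across the data set.
code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt["codes"])

# Group codes into broader candidate themes (an interpretive step, shown here as a fixed mapping).
themes = {
    "Strain of work demands": ["work stress", "sleep problems"],
    "Role of relationships": ["isolation", "social support"],
}

for theme, codes in themes.items():
    total = sum(code_counts[c] for c in codes)
    print(f"{theme}: {total} coded segments ({', '.join(codes)})")
```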


Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made with confidence to a wider context than the one studied.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of an area is necessary to interpret qualitative data. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables, make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomena across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).
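
As a minimal sketch of this point, the snippet below (with made-up responses) shows how answers to a yes/no question become category counts and how rating-scale answers are already numerical and can be averaged.

```python
# Hypothetical responses to two closed questions on a questionnaire.
yes_no_answers = ["yes", "no", "yes", "yes", "no"]   # "Do you exercise weekly?"
likert_ratings = [4, 5, 3, 4, 2]                     # 1 = strongly disagree ... 5 = strongly agree

# Categorical answers become frequencies and percentages.
yes_count = yes_no_answers.count("yes")
print(f"Yes: {yes_count}/{len(yes_no_answers)} ({100 * yes_count / len(yes_no_answers):.0f}%)")

# Rating-scale answers are already numerical and can be averaged.
mean_rating = sum(likert_ratings) / len(likert_ratings)
print(f"Mean agreement rating: {mean_rating:.1f} on a 1-5 scale")
```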

Experimental methods limit the possible ways in which research participants can react to and express appropriate social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including research on mental health. Here are a few examples:

One example is the Experiences in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles .

The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data : Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function.

This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

Another example is the Beck Depression Inventory (BDI), a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals.

The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms. 
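
As a worked example of how such an inventory produces a quantitative score, the sketch below sums 21 hypothetical item ratings, each scored 0 to 3, giving a total that can range from 0 to 63 (21 × 3). The item values are invented for illustration only; scoring and interpretation of the actual BDI should follow its published manual.

```python
# Hypothetical ratings for the 21 BDI items, each scored 0-3.
item_scores = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 2, 1, 0, 1, 1, 0]

assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)

total = sum(item_scores)  # possible range: 0 (no symptoms endorsed) to 63 (21 * 3)
print(f"Total BDI-style score: {total} out of 63")
```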

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized controlled study).
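
The distinction can be illustrated with a minimal Python sketch using invented scores for a hypothetical intervention and control group: descriptive statistics summarize each group, and an inferential test (an independent-samples t-test via scipy, assuming it is installed) asks whether the difference between the groups is statistically significant.

```python
from statistics import mean, stdev
from scipy import stats  # assumes scipy is installed

# Invented outcome scores for two groups in a hypothetical randomized controlled study.
intervention = [14, 12, 15, 11, 13, 16, 12, 14]
control = [10, 9, 11, 12, 8, 10, 11, 9]

# Descriptive statistics: summarize each group.
print(f"Intervention: mean={mean(intervention):.1f}, sd={stdev(intervention):.1f}")
print(f"Control:      mean={mean(control):.1f}, sd={stdev(control):.1f}")

# Inferential statistics: independent-samples t-test comparing the two groups.
result = stats.ttest_ind(intervention, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```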

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning the questions may have for them (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on theory or hypothesis testing rather than on hypothesis generation.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS . Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics . Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The Good Research Guide: for small-scale social research. McGraw Hill.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage Publications.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.

Minichiello, V. (1990). In-Depth Interviewing: Researching People. Longman Cheshire.

Punch, K. (1998). Introduction to Social Research: Quantitative and Qualitative Approaches. London: Sage

Further Information

  • Mixed methods research
  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


What is Quantitative Research According to Authors?

By Med Kharbach, PhD | Published: May 9, 2023 | Updated: August 15, 2024


In this post, we will discuss the concept of quantitative research as viewed through the lens of various esteemed authors. The aim is to provide a holistic view of this research method, focusing particularly on guiding beginner researchers and graduate students towards seminal works that offer invaluable insights into the field.

Quantitative research is a pivotal aspect of academic inquiry, and understanding its fundamentals is crucial for anyone venturing into the realm of research. We’ll explore the definitions and perspectives of quantitative research according to John Creswell, along with other notable scholars in the field. These insights are not only foundational for grasping the essence of quantitative research but also serve as a beacon for those navigating the often-complex landscape of academic research methodologies.


Here are some key definitions of quantitative research according to different scholars:

1. Quantitative Research According to John Creswell

Creswell (2014) defines quantitative research as:

an inquiry into a social or human problem, based on testing a theory composed of variables, measured with numbers, and analyzed with statistical procedures, in order to determine whether the predictive generalizations of the theory hold true. The final written report has a set structure consisting of introduction, literature and theory, methods, results, and discussion. Like qualitative researchers, those who engage in this form of inquiry have assumptions about testing theories deductively, building in protections against bias, controlling for alternative or counterfactual explanations, and being able to generalize and replicate the findings. (p. 4) Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: SAGE Publications.

To elaborate, Creswell’s definition highlights key aspects of quantitative research, emphasizing its focus on testing objective theories by examining relationships among variables. In this approach, variables are measurable and quantifiable, allowing researchers to gather numerical data that can be systematically analyzed using statistical methods.

Quantitative research is grounded in a positivist paradigm, which assumes that there is an objective reality that can be measured and understood through empirical observation. By employing standardized and structured instruments, such as surveys and experiments, researchers seek to minimize subjective biases and ensure the reliability and validity of their findings.

The process typically involves the formulation of specific hypotheses derived from existing theories, which are then tested through the analysis of data. This deductive approach enables researchers to confirm, refute, or refine their theoretical assumptions based on empirical evidence.

Statistical procedures play a crucial role in quantitative research, as they help identify patterns, trends, and relationships among variables. Descriptive statistics provide an overview of the data, while inferential statistics allow researchers to make generalizations from their sample to the broader population.

In summary, Creswell’s definition of quantitative research emphasizes its objective nature, the examination of relationships among measurable variables, and the use of statistical procedures for data analysis. This approach is instrumental in generating evidence-based insights, informing decision-making processes, and advancing knowledge across various fields.

For more, check out this detailed post titled What is Quantitative Research According to Creswell?

2. Quantitative Research According to Punch

Punch (1998) contrasts quantitative research with qualitative research, stating that the former represents “empirical research where the data are in the form of numbers” and the latter represents “empirical research where the data are not in the form of numbers” (p. 4).

As you can see, Punch’s definition of quantitative and qualitative research provides a straightforward distinction between the two methodologies based on the type of data collected. 

Quantitative research, as Punch defines it, relies on numerical data. This approach allows for precise measurements, statistical analysis, and the identification of patterns, trends, and relationships among variables.

Quantitative research, as I stated earlier, is often grounded in the positivist paradigm, which assumes an objective reality that can be studied and understood through empirical observation. Examples of quantitative research methods include surveys, experiments, and structured observations.

On the other hand, qualitative research focuses on non-numerical data, such as words, images, or actions. This approach aims to capture the complexity and richness of human experiences and social phenomena.

Qualitative research is often rooted in the interpretivist or constructivist paradigm, which acknowledges that reality is subjective and co-constructed by individuals through their experiences and interpretations. Examples of qualitative research methods include interviews, focus groups, ethnography, and content analysis.

In summary, Punch distinguishes quantitative and qualitative research based on the nature of the data collected, with the former involving numerical data and the latter focusing on non-numerical data. This distinction reflects the different epistemological assumptions, research methods, and analytical approaches employed in each methodology.

3. Quantitative Research According to Patricia Leavy

According to Patricia Leavy (2022), quantitative research:

“values breadth, statistical descriptions, and generalizability. Quantitative approaches to research center on achieving objectivity, control, and precise measurement. Methodological, these approaches rely on deductive designs aimed at refuting or building evidence in favor of specific theories and hypotheses. Marianne Fallon (2016) refers to quantitative research as a ‘top down process’ (p. 3). Quantitative approaches are most commonly used in explanatory research investigating causal relationships, associations, and correlations.” (p. 99) Leavy, P. (2022). Research Design: Quantitative, Qualitative, Mixed Methods, Arts-Based, and Community-Based Participatory Research Approaches. Guilford Publications.

In this excerpt, Leavy (2022) characterizes quantitative research as an approach that values breadth, statistical descriptions, and generalizability. The focus of quantitative research is on achieving objectivity, control, and precise measurement, which is achieved through the use of structured and standardized methods. This approach is grounded in a deductive research design, which starts with theories and hypotheses that are then tested and validated or refuted based on empirical evidence.

Fallon (2016, cited by Leavy) describes quantitative research as a “top-down process” (p. 3), which emphasizes the importance of established theories and prior research in guiding the formulation of new hypotheses. This approach allows researchers to build upon existing knowledge and refine theoretical frameworks.

quantitative research environment meaning

Quantitative research is particularly well-suited for explanatory research, as it seeks to uncover causal relationships, associations, and correlations among variables. By employing rigorous sampling techniques and statistical analyses, quantitative researchers can identify patterns and relationships in the data, which can then be generalized to the broader population.

In conclusion, Leavy (2022) highlights the key aspects of quantitative research, emphasizing its focus on breadth, statistical descriptions, generalizability, objectivity, control, precise measurement, and explanatory power. This approach provides valuable insights into causal relationships and associations, contributing to the advancement of knowledge across various fields.

4. Quantitative Research According to Kothari

Let me share with you this lengthy passage by Kothari (2004) explaining quantitative research. According to Kothari (2004), quantitative research:

involves the generation of data in quantitative form which can be subjected to rigorous quantitative analysis in a formal and rigid fashion. This approach can be further sub-classified into inferential, experimental and simulation approaches to research. The purpose of inferential approach to research is to form a database from which to infer characteristics or relationships of population. This usually means survey research where a sample of population is studied (questioned or observed) to determine its characteristics, and it is then inferred that the population has the same characteristics. Experimental approach is characterised by much greater control over the research environment and in this case some variables are manipulated to observe their effect on other variables. Simulation approach involves the construction of an artificial environment within which relevant information and data can be generated. This permits an observation of the dynamic behaviour of a system (or its sub-system) under controlled conditions. The term ‘simulation’ in the context of business and social sciences applications refers to “‘the operation of a numerical model that represents the structure of a dynamic process. Given the values of initial conditions, parameters and exogenous variables, a simulation is run to represent the behaviour of the process over time.” Simulation approach can also be useful in building models for understanding future conditions. (p. 5) Kothari, C. R. (2004). Research Methodology: Methods & Techniques. New Age International.

Kothari (2004) provides a comprehensive overview of quantitative research, emphasizing its focus on generating data that can be subjected to rigorous quantitative analysis in a formal and rigid manner. The author further categorizes quantitative research into three sub-approaches: inferential, experimental, and simulation.

1. Inferential approach: This approach is commonly used in survey research, where a sample of the population is studied to determine its characteristics. Researchers then infer that the larger population shares these characteristics. The goal is to understand the population’s characteristics or relationships based on the analyzed data from the sample.

2. Experimental approach: This approach is characterized by greater control over the research environment, where variables are manipulated to observe their effects on other variables. Experimental research is used to establish cause-and-effect relationships and often involves controlled settings and random assignment of participants to different conditions.

3. Simulation approach: This approach entails creating an artificial environment to generate relevant data and observe the dynamic behavior of a system or its sub-systems under controlled conditions. In the context of business and social sciences, simulation refers to the operation of a numerical model representing the structure of a dynamic process. This approach helps in building models for understanding future conditions and predicting potential outcomes.
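
To make the simulation approach concrete, here is a minimal Python sketch of the kind of numerical model Kothari describes: given an initial condition and parameter values, a simple dynamic process is run forward over time. The model (a hypothetical customer base with monthly acquisition and churn rates) and all numbers are invented for illustration.

```python
# A toy dynamic model: customers_next = customers + acquired - churned.
initial_customers = 1000   # initial condition
acquisition_rate = 0.08    # parameter: 8% of the current base gained per month
churn_rate = 0.05          # parameter: 5% of the current base lost per month
months = 12

customers = initial_customers
for month in range(1, months + 1):
    acquired = customers * acquisition_rate
    churned = customers * churn_rate
    customers = customers + acquired - churned
    print(f"Month {month:2d}: {customers:8.1f} customers")

# The simulated trajectory can then be re-run under different parameter values
# to explore how the system might behave under future conditions.
```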

In summary, Kothari (2004) delineates quantitative research as a method that generates and analyzes data in a systematic, rigorous manner, further sub-dividing it into inferential, experimental, and simulation approaches. Each sub-approach offers unique insights and techniques for understanding various aspects of the phenomena under investigation.

5. Quantitative Research According to Williams et al.

Williams et al. (2022) define quantitative research as:

investigations in which the data that are collected and coded are expressible as numbers. By contrast, studies in which data are collected and coded as words would be instances of qualitative research. Weightier distinctions have also been important in discussions of research methods – distinctions bordering on epistemologies, worldviews and ontologies, to name a few… Quantitative research is grounded in the scientific tradition, so description and inference with the potential to lead to causal explanation and prediction are its core business. Its methods are those of the experiment, the social survey or the analysis of official statistics or naturally occurring data. It can take many forms from a local neighbourhood survey to large-scale population surveys with several thousand people taking part. It may be a carefully controlled experiment in a laboratory, or it might be ‘big-data’ analysis of millions of Twitter feeds. (p. 3) Williams et al. (2022). Beginning Quantitative Research. SAGE Publications, Limited.

In this passage, Williams et al. (2022) provide a rule-of-thumb definition of quantitative research as investigations where the collected and coded data can be expressed as numbers, while qualitative research deals with data collected and coded as words. The authors acknowledge that more profound distinctions exist, touching upon epistemologies, worldviews, and ontologies.

Quantitative research is rooted in the scientific tradition, focusing on description and inference, with the potential to lead to causal explanation and prediction. The methods employed in quantitative research include experiments, social surveys, and the analysis of official statistics or naturally occurring data.

The scope of quantitative research can vary widely, from small-scale neighborhood surveys to large-scale population studies involving thousands of participants. It can also encompass controlled experiments in laboratories or the analysis of vast amounts of data, such as millions of Twitter feeds, commonly referred to as “big data.”

In summary, Williams et al. (2022) highlight the numerical nature of quantitative research and its grounding in the scientific tradition. This approach aims to describe, infer, and potentially explain causal relationships and make predictions using various methods, ranging from small-scale surveys to large-scale big data analysis.

After reviewing the various definitions of quantitative research offered by scholars, it becomes clear that this approach is a systematic, empirical method grounded in the scientific tradition and positivist paradigm. The core aspects of quantitative research are:

  • Numerical Data Collection and Analysis : Utilizing structured and standardized methods such as surveys, experiments, or analysis of naturally occurring data to gather numerical data.
  • Objectivity and Precision : Emphasizing objectivity, control, precision, generalizability, and the establishment of cause-and-effect relationships or correlations.
  • Deductive Reasoning : Starting with theories and hypotheses that are tested and validated or refuted based on empirical evidence.
  • Statistical Analysis : Applying statistical procedures to analyze data, identify patterns, trends, and relationships, and make inferences or predictions about a broader population.

Quantitative research is essential for advancing knowledge in various fields by providing evidence-based insights, informing decision-making processes, and building upon existing theories. Despite differences in emphasis among scholars, the core characteristics of quantitative research converge on the systematic collection and analysis of numerical data, pursuit of objectivity and generalizability, and reliance on statistical procedures for data interpretation. This approach continues to play a vital role in enriching our understanding of the world and informing practical applications in diverse disciplines.

  • Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: SAGE Publications.
  • Kothari, C. R. (2004). Research methodology: Methods & techniques. New Age International.
  • Leavy, P. (2022). Research design: Quantitative, qualitative, mixed methods, arts-based, and community-based participatory research approaches. Guilford Publications.
  • Punch, K. F. (1998). Introduction to social research: Quantitative and qualitative approaches. Thousand Oaks, CA: SAGE Publications.
  • Williams, M., et al. (2022). Beginning quantitative research. SAGE Publications.

Two other interesting works to consider are:

  • Tashakkori, A., & Teddlie, C. (2009). Integrating qualitative and quantitative approaches to research. In L. Bickman & D. J. Rog (Eds.), The SAGE handbook of applied social research methods. SAGE Publications.
  • O’Leary, Z. (2009) The Essential Guide to Doing Your Research Project. London: Sage


Meet Med Kharbach, PhD

Dr. Med Kharbach is an influential voice in the global educational landscape, with an extensive background in educational studies and a decade-long experience as a K-12 teacher. Holding a Ph.D. from Mount Saint Vincent University in Halifax, Canada, he brings a unique perspective to the educational world by integrating his profound academic knowledge with his hands-on teaching experience. Dr. Kharbach's academic pursuits encompass curriculum studies, discourse analysis, language learning/teaching, language and identity, emerging literacies, educational technology, and research methodologies. His work has been presented at numerous national and international conferences and published in various esteemed academic journals.


Qualitative vs. Quantitative Research | Differences, Examples & Methods

Published on April 12, 2019 by Raimo Streefkerk. Revised on June 22, 2023.

When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.

Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.

Quantitative research is at risk for research biases including information bias, omitted variable bias, sampling bias, or selection bias. Qualitative research is expressed in words. It is used to understand concepts, thoughts or experiences. This type of research enables you to gather in-depth insights on topics that are not well understood.

Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.

Table of contents

  • The differences between quantitative and qualitative research
  • Data collection methods
  • When to use qualitative vs. quantitative research
  • How to analyze qualitative and quantitative data
  • Other interesting articles
  • Frequently asked questions about qualitative and quantitative research

Quantitative and qualitative research use different research methods to collect and analyze data, and they allow you to answer different kinds of research questions.

[Table: Qualitative vs. quantitative research]

Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).

Many data collection methods can be either qualitative or quantitative. For example, in surveys, observational studies or case studies , your data can be represented as numbers (e.g., using rating scales or counting frequencies) or as words (e.g., with open-ended questions or descriptions of what you observe).

However, some methods are more commonly used in one type or the other.

Quantitative data collection methods

  • Surveys : List of closed-ended or multiple-choice questions that is distributed to a sample (online, in person, or over the phone).
  • Experiments : Situation in which different types of variables are controlled and manipulated to establish cause-and-effect relationships.
  • Observations : Observing subjects in a natural environment where variables can’t be controlled.

Qualitative data collection methods

  • Interviews : Asking open-ended questions verbally to respondents.
  • Focus groups : Discussion among a group of people about a topic to gather opinions that can be used for further research.
  • Ethnography : Participating in a community or organization for an extended period of time to closely observe culture and behavior.
  • Literature review : Survey of published works by other authors.

A rule of thumb for deciding whether to use qualitative or quantitative data is:

  • Use quantitative research if you want to confirm or test something (a theory or hypothesis )
  • Use qualitative research if you want to understand something (concepts, thoughts, experiences)

For most research topics you can choose a qualitative, quantitative or mixed methods approach . Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach ; your research question(s) ; whether you’re doing experimental , correlational , or descriptive research ; and practical considerations such as time, money, availability of data, and access to respondents.

Quantitative research approach

You survey 300 students at your university and ask them questions such as: “on a scale from 1-5, how satisfied are you with your professors?”

You can perform statistical analysis on the data and draw conclusions such as: “on average students rated their professors 4.4”.

Qualitative research approach

You conduct in-depth interviews with 15 students and ask them open-ended questions such as: “How satisfied are you with your studies?”, “What is the most positive aspect of your study program?” and “What can be done to improve the study program?”

Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.

Mixed methods approach

You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.

It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.

Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.

Analyzing quantitative data

Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.

Applications such as Excel, SPSS, or R can be used to calculate things like:

  • Average scores ( means )
  • The number of times a particular answer was given
  • The correlation or causation between two or more variables
  • The reliability and validity of the results
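
As an illustration of the last two items in the list above, here is a minimal Python sketch using numpy and invented data: it computes a Pearson correlation between two variables and Cronbach's alpha, one common indicator of internal-consistency reliability (validity requires separate evidence and is not something a formula alone can establish).

```python
import numpy as np

# Invented data: hours studied and exam scores for ten students.
hours = np.array([2, 4, 6, 3, 8, 5, 7, 1, 6, 4])
score = np.array([55, 60, 72, 58, 85, 66, 78, 50, 70, 63])

# Pearson correlation between the two variables (correlation, not causation).
r = np.corrcoef(hours, score)[0, 1]
print(f"Correlation between hours studied and exam score: r = {r:.2f}")

# Cronbach's alpha for a hypothetical 4-item scale (rows = respondents, columns = items).
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha for the 4-item scale: {alpha:.2f}")
```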

Analyzing qualitative data

Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.

Some common approaches to analyzing qualitative data include:

  • Qualitative content analysis : Tracking the occurrence, position and meaning of words or phrases
  • Thematic analysis : Closely examining the data to identify the main themes and patterns
  • Discourse analysis : Studying how communication works in social contexts

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Streefkerk, R. (2023, June 22). Qualitative vs. Quantitative Research | Differences, Examples & Methods. Scribbr. Retrieved August 29, 2024, from https://www.scribbr.com/methodology/qualitative-quantitative-research/

Is this article helpful?

Raimo Streefkerk

Raimo Streefkerk

Other students also liked, what is quantitative research | definition, uses & methods, what is qualitative research | methods & examples, mixed methods research | definition, guide & examples, get unlimited documents corrected.

✔ Free APA citation check included ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

[email protected]

SIS International Market Research

What is Quantitative Research?

June 24, 2024

SIS International

Quantitative Research is a structured way of collecting and analyzing data from different sources.

Quantitative research is a cornerstone of empirical inquiry in the vast landscape of research methodologies. It wields statistical tools and numerical data to uncover insights and trends. Its structured approach and ability to quantify phenomena have made it a linchpin in various industries and academic fields… But what exactly is quantitative research, and why does it matter?

What Is Quantitative Research?

Quantitative research is a systematic data collection and analysis approach emphasizing quantifiable and numerical data. It employs statistical and computational techniques to measure, analyze, and interpret phenomena to uncover patterns, relationships, and trends. Unlike qualitative research, which focuses on subjective experiences and meanings, quantitative research seeks to quantify variables and test hypotheses through rigorous methodologies such as surveys, experiments, and observational studies.

Why Do Businesses Need Quantitative Research?

Quantitative research provides businesses with empirical data and numerical insights that guide decision-making processes. By collecting and analyzing data on consumer preferences, market trends, and industry dynamics, businesses can make informed decisions that are grounded in evidence rather than intuition or guesswork.

This research enables businesses to identify and assess risks associated with market fluctuations, competitive pressures, and changing consumer behaviors. By conducting market analysis, trend forecasting, and predictive modeling, businesses can anticipate potential risks and develop strategies to mitigate their impact, safeguarding against unforeseen challenges.

Moreover, it allows businesses to assess their performance, measure key performance indicators (KPIs), and track progress toward organizational goals.

What Are the Benefits of Quantitative Research?

Quantitative research offers numerous benefits to businesses seeking insights, making informed decisions, and driving strategic growth. Here are some key advantages:

  • Statistical Rigor : Research employs rigorous statistical methods and sampling techniques, ensuring that findings are reliable, replicable, and generalizable. 
  • Objectivity and Unbiased Analysis : Research emphasizes objectivity and impartiality in data collection and analysis, minimizing the influence of researcher bias or subjectivity. 
  • Quantifiable Insights : Research generates quantifiable data and numerical insights that are easy to interpret, compare, and analyze. 
  • Scalability and Efficiency : Research enables businesses to collect data from large samples efficiently and cost-effectively, making it suitable for studying trends, patterns, and behaviors at scale.
  • Predictive Capability : Research enables businesses to make predictions and forecasts based on statistical analysis of past and present data.
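
As an illustration of the last point, the sketch below fits a simple linear trend to invented monthly sales figures using numpy and extrapolates one period ahead. Real forecasting models are usually far more sophisticated; this only shows the basic idea of predicting from past numerical data.

```python
import numpy as np

# Invented monthly sales (in thousands) for the past eight months.
months = np.arange(1, 9)
sales = np.array([110, 115, 121, 119, 128, 133, 138, 141])

# Fit a straight-line trend: sales ≈ slope * month + intercept.
slope, intercept = np.polyfit(months, sales, deg=1)

# Predict the next month from the fitted trend.
next_month = 9
forecast = slope * next_month + intercept
print(f"Estimated trend: +{slope:.1f}k per month; forecast for month {next_month}: {forecast:.0f}k")
```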

When to Conduct This Research

Determining when to conduct quantitative research depends on various factors, including the research objectives, the nature of the research question, and the availability of resources. Here are some critical considerations for deciding when to conduct quantitative research:

  • When Quantifiable Data is Needed : Quantitative research is appropriate when the research question requires numerical data that can be quantified, analyzed statistically, and used to test hypotheses or make predictions. If the research question involves measuring the prevalence of a phenomenon, assessing the relationship between variables, or comparing groups, quantitative methods are well-suited to provide precise, quantifiable answers.
  • When Generalizability is Desired : Quantitative research is often conducted when researchers seek to generalize findings to broader populations or contexts. Using random or probability sampling techniques and collecting data from large samples, researchers can obtain results that are representative of the target population and can be generalized with a certain level of confidence (a minimal sketch of such a calculation follows this list). This is particularly important when making inferences about population characteristics, market trends, or public opinions.
  • When Objective and Replicable Findings are Needed : Quantitative research is valuable when researchers aim to produce objective, replicable findings free from bias and subjectivity. By employing standardized measurement tools, clear operational definitions, and systematic data collection procedures, researchers can minimize the influence of researcher bias and ensure the reliability and validity of their findings.
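
Here is the sketch referred to above: a minimal Python example of generalizing from a hypothetical random sample to a wider population, using the normal-approximation 95% confidence interval for a proportion. The survey numbers are invented.

```python
import math

# Hypothetical survey: 240 of 600 randomly sampled customers prefer the new product.
n = 600
successes = 240
p_hat = successes / n  # sample proportion (0.40)

# 95% confidence interval using the normal approximation: p ± z * sqrt(p(1-p)/n).
z = 1.96
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Estimated share: {p_hat:.0%} ± {margin:.1%} "
      f"(95% CI: {p_hat - margin:.1%} to {p_hat + margin:.1%})")
```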

Quantitative Research Characteristics

Quantitative research is characterized by several key features that distinguish it from qualitative research methods:

• Structured Data Collection : Research systematically collects structured data using standardized measurement tools and instruments. Surveys, questionnaires, experiments, and observational studies are common methods for collecting numerical data from participants or sources.

• Statistical Analysis : Quantitative research relies on statistical analysis techniques to analyze and interpret numerical data. Descriptive statistics, inferential statistics, and multivariate analysis are commonly used to summarize data, test hypotheses, and identify patterns or relationships among variables.

• Large Sample Sizes : Research typically involves collecting data from large sample sizes to ensure statistical validity and generalizability of findings. Random sampling techniques are often used to select participants or sources from the population of interest, ensuring that the sample is representative of the target population.

• Objective and Replicable Findings : Quantitative research aims to produce objective, replicable findings that can be generalized to broader populations or contexts. Quantitative researchers minimize the influence of researcher bias and subjectivity by using standardized measurement tools, clear operational definitions, and systematic data collection procedures.

• Quantifiable Results : Quantitative research generates quantifiable results that can be expressed numerically and statistically. Variables are measured using numerical scales or categories, allowing researchers to quantify relationships, compare groups, and make predictions based on numerical data.

Quantitative Research vs Qualitative Research

Quantitative and qualitative research are two distinct approaches to data collection and analysis, each with strengths and limitations. Here are some key differences between quantitative and qualitative research methods:

• Nature of Data : Quantitative research collects numerical data and quantifies variables, allowing statistical analysis and hypothesis testing. In contrast, qualitative research collects non-numerical data in words, images, or observations, emphasizing rich descriptions, meanings, and interpretations.

• Research Design : Quantitative research typically follows a deductive approach, testing hypotheses using structured data collection methods and statistical analysis techniques. Qualitative research, on the other hand, often adopts an inductive approach, where theories and insights emerge from the data through open-ended inquiry and exploration.

• Sampling and Generalizability : Quantitative research typically uses random or probability sampling techniques to select participants or sources from the population of interest, aiming for a representative sample. Findings from quantitative studies can be generalized to broader populations with a certain level of confidence. In contrast, qualitative research may use purposeful or convenience sampling to select participants based on specific criteria or characteristics. While qualitative research provides rich, detailed insights into particular contexts or phenomena, findings may not be readily generalizable to other contexts.

• Data Analysis : Quantitative research relies on statistical analysis techniques, such as descriptive statistics, inferential statistics, and regression analysis, to analyze and interpret numerical data. Qualitative research employs qualitative data analysis methods, such as thematic analysis, content analysis, or grounded theory, to identify patterns, themes, and meanings within the data.

• Research Objectives : Quantitative research is often used to quantify relationships, test hypotheses, and measure the prevalence of phenomena within a population. It is well-suited for addressing research questions that require numerical data and statistical analysis. Qualitative research, on the other hand, is used to explore complex phenomena, understand social processes, and capture the subjective experiences of individuals. It is valuable for generating in-depth insights and understanding the context behind numerical trends or patterns.

Who Uses Quantitative Research

Quantitative research is applied across various fields and disciplines to address research questions, test hypotheses, and generate empirical evidence. Here are some examples of quantitative research in different contexts:

• Market Research : In market research, quantitative methods assess consumer preferences, market trends, and purchasing behavior. Surveys, experiments, and statistical analysis techniques measure brand awareness, customer satisfaction, and market share, allowing businesses to make data-driven decisions about product development, pricing strategies, and marketing campaigns.

• Healthcare Research : Quantitative research is used to study disease prevalence, treatment effectiveness, and healthcare outcomes. Clinical trials, epidemiological studies, and health surveys are conducted to collect numerical data on patient demographics, clinical measurements, and health outcomes. This enables researchers to evaluate the efficacy of medical interventions, identify risk factors, and inform public health policies.

• Education Research : In education research, quantitative methods assess student performance, educational attainment, and learning outcomes. Standardized tests, surveys, and statistical analysis techniques are employed to measure academic achievement, assess the effectiveness of teaching methods, and identify factors that influence student success, informing educational policies and practices.

• Social Science Research : Quantitative research is widely used in social science disciplines such as sociology, psychology, and political science to study social phenomena, attitudes, and behaviors. Surveys, experiments, and statistical analysis techniques collect numerical data on social attitudes, group dynamics, and political preferences, enabling researchers to test theories, identify patterns, and predict social trends and behavior.

• Environmental Research : In environmental research, quantitative methods are used to study environmental processes, assess environmental impacts, and monitor ecosystem changes. Remote sensing, GIS mapping, and statistical analysis techniques are employed to collect numerical data on environmental variables such as temperature, precipitation, and biodiversity. This enables researchers to assess environmental health, identify risks, and inform conservation efforts.

Major Case Studies

To illustrate the practical application and impact of quantitative research in real-world business scenarios, let’s explore some significant case studies:

Netflix, the leading streaming service provider, relies heavily on quantitative research to drive its content recommendation algorithms and personalized user experiences. By analyzing viewing patterns, user ratings, and demographic data, Netflix can recommend relevant content to individual users, enhance customer engagement, and reduce churn rates. This data-driven approach has contributed to Netflix’s rapid growth and dominance in the streaming industry.

Amazon utilizes quantitative research to optimize its product recommendations, pricing strategies, and supply chain operations. Through data analysis of customer purchase history, browsing behavior, and market trends, Amazon can personalize product recommendations, adjust pricing dynamically, and improve inventory management to meet customer demand efficiently. This data-driven approach has enabled Amazon to remain a market leader in e-commerce.

Uber, the ride-sharing platform, leverages quantitative research to optimize its pricing, driver allocation, and route optimization algorithms. By analyzing passenger demand patterns, driver availability, and traffic conditions in real-time, Uber can dynamically adjust prices, match drivers with passengers efficiently, and optimize routes to minimize wait times and travel costs. This data-driven approach has helped Uber disrupt the transportation industry and revolutionize people’s commutes.

Facebook relies on quantitative research to enhance user engagement, target advertising, and optimize content algorithms. Through data analysis of user interactions, demographic profiles, and content preferences, Facebook can personalize users’ news feeds, deliver targeted advertisements, and optimize content algorithms to maximize user engagement and ad revenue. This data-driven approach has made Facebook one of the most influential social media platforms globally.

Procter & Gamble, a multinational consumer goods company, uses quantitative research to inform product development, marketing strategies, and brand positioning. Through data analysis of consumer preferences, market trends, and competitive landscapes, Procter & Gamble can identify market opportunities, develop innovative products, and launch targeted marketing campaigns that resonate with consumers. This data-driven approach has contributed to Procter & Gamble’s success as a market leader in the consumer goods industry.

Expected Results from SIS’s Research

When businesses engage in quantitative research conducted by SIS International, they can expect several key outcomes and benefits:

Actionable Insights : 

SIS International’s quantitative research delivers actionable insights that businesses can use to inform strategic decision-making, optimize operations, and drive growth. SIS provides clients valuable insights into market trends, consumer behaviors, and competitive dynamics, enabling them to effectively identify opportunities and mitigate risks.

Data-driven Strategies : 

SIS’s quantitative research empowers businesses to develop data-driven strategies and initiatives grounded in evidence and supported by empirical research findings. Our experts enable clients to make informed decisions backed by robust data and analysis.

Competitive Advantage : 

Through SIS’s quantitative research, businesses gain a competitive advantage by gaining deeper insights into their target markets, customers, and competitors. By understanding consumer preferences, market demand, and emerging trends, clients can differentiate their offerings, refine their marketing messages, and stay ahead of the competition in today’s dynamic marketplace.

Measurable Results : 

SIS International’s quantitative research produces measurable results that enable clients to track performance, monitor progress, and evaluate the impact of their strategic initiatives over time. By establishing key performance indicators (KPIs) and benchmarks, clients can assess the effectiveness of their strategies and make data-driven adjustments to achieve their business objectives.

Strategic Partnerships : 

SIS International is a strategic partner to businesses, providing ongoing support and guidance throughout the research process. From study design and data collection to analysis and interpretation, SIS’s team of experienced researchers, analysts, and consultants work closely with clients to deliver customized solutions tailored to their needs and objectives.

Challenges 

Despite its numerous benefits, quantitative research presents several challenges that businesses must navigate effectively. Key challenges include:

  • Complexity of Data Analysis : Research involves collecting and analyzing large volumes of numerical data, which can be complex and time-consuming to process and interpret. 
  • Sampling Bias : Sampling bias occurs when the sample population is not representative of the target population, leading to inaccurate or biased results. 
  • Limited Contextual Understanding : Quantitative research focuses on numerical data and statistical analysis, often at the expense of contextual understanding and qualitative insights. 
  • Survey Design Challenges : Designing effective survey instruments for research can be challenging, requiring careful attention to question wording, response options, survey length, and survey format.
  • Interpretation and Actionability : Interpreting quantitative findings and translating them into actionable insights can be challenging for businesses, particularly if they lack data analysis or statistical interpretation expertise.

Industry Attractiveness: SWOT Analysis of the Quantitative Research Market

Conducting a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) of the quantitative research market provides valuable insights into its current state and prospects:

Strengths :

  • Robust Data Analysis: Quantitative research offers advanced data analysis techniques, such as regression analysis, hypothesis testing, and predictive modeling, allowing businesses to derive actionable insights from large datasets (a brief illustrative sketch follows this list).
  • Scalability: Quantitative research methods, such as surveys, experiments, and observational studies, can be scaled up to accommodate large sample sizes and generate statistically significant results, making them suitable for research projects of varying scopes and complexities.
  • Objectivity: Quantitative research emphasizes objectivity and standardization in data collection and analysis, minimizing subjective biases and ensuring the reliability and validity of research findings.
  • Statistical Rigor: Quantitative research employs rigorous statistical methods and procedures to test hypotheses, establish causal relationships, and draw meaningful conclusions from empirical data, enhancing the credibility and robustness of research findings.
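
To make the regression analysis and hypothesis testing mentioned above concrete, here is a minimal sketch that fits a simple linear regression and runs a two-sample t-test on invented data. The scenario, variable names, and numbers are assumptions for illustration only; a real analysis would begin with data quality checks and an assessment of model assumptions.

```python
# Illustrative sketch only: simple regression and a hypothesis test on made-up
# data, standing in for the techniques named in the "Strengths" list above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: monthly ad spend (in $1,000s) and units sold.
ad_spend = rng.uniform(10, 100, size=50)
sales = 200 + 3.5 * ad_spend + rng.normal(0, 25, size=50)

# Simple linear regression: does ad spend predict sales?
reg = stats.linregress(ad_spend, sales)
print(f"slope={reg.slope:.2f}, r^2={reg.rvalue**2:.2f}, p={reg.pvalue:.4f}")

# Two-sample t-test: do two (hypothetical) customer segments differ in spend?
segment_a = rng.normal(55, 12, size=40)
segment_b = rng.normal(62, 12, size=40)
t_stat, p_value = stats.ttest_ind(segment_a, segment_b)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
```

A small p-value for the slope or the t-statistic would suggest a relationship or group difference worth investigating further, which is the kind of evidence these techniques are used to produce.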

Weaknesses :

  • Lack of Contextual Understanding: Quantitative research may lack the depth and nuance provided by qualitative research methods, such as interviews or focus groups, resulting in a limited understanding of the underlying motivations, attitudes, and behaviors of research participants.
  • Sampling Bias: Quantitative research is susceptible to sampling bias, where the sample population may not accurately represent the broader target population, leading to biased or unreliable research findings.
  • Complexity of Analysis: Analyzing quantitative data requires specialized skills in statistics, data analysis software, and research methodologies, which may pose challenges for businesses lacking in-house expertise or resources.
  • Inflexibility: Quantitative research methods typically follow predefined protocols and standardized procedures, which may limit flexibility and adaptability in addressing dynamic research questions or emerging research trends.

Opportunities :

  • Technological Advancements: Technological advances, such as big data analytics, machine learning, and artificial intelligence, present opportunities to enhance the efficiency, accuracy, and scalability of quantitative research methods, enabling businesses to derive deeper insights from complex datasets.
  • Cross-Industry Applications: Quantitative research methods have broad applications across diverse industries, including marketing, finance, healthcare, and social sciences, offering opportunities for businesses to leverage data-driven insights to inform strategic decision-making and drive innovation.
  • Global Market Expansion: The increasing globalization of markets and advancements in digital technologies have facilitated the expansion of quantitative research services to global markets, allowing businesses to access a wider pool of research participants and opportunities for international collaboration and growth.
  • Demand for Evidence-Based Decision-Making: In an increasingly competitive and data-driven business environment, there is a growing demand for evidence-based decision-making processes, driving the adoption of quantitative research methods among businesses seeking to gain a competitive edge and achieve sustainable growth.

Threats :

  • Data Privacy Concerns: Heightened concerns about data privacy, security, and compliance regulations, such as GDPR and CCPA, pose threats to quantitative research activities, requiring businesses to implement robust data protection measures and ensure compliance with regulatory requirements.
  • Competition from Alternative Research Methods: The proliferation of alternative research methods, such as qualitative research, social media analytics, and sentiment analysis, may pose competitive threats to traditional quantitative research methods, challenging businesses to innovate and differentiate their offerings to meet evolving market demands.
  • Economic Uncertainty: Economic volatility, geopolitical tensions, and global crises, such as the COVID-19 pandemic, can disrupt business operations, reduce research budgets, and dampen demand for quantitative research services, posing threats to market growth and profitability.
  • Technological Disruption: Rapid technological advancements and disruptive innovations may render traditional quantitative research methods obsolete or less effective, requiring businesses to adapt to emerging technologies and evolving market trends to remain competitive in the research industry.

How SIS International’s Solutions Help Businesses

SIS International Research offers a comprehensive suite of quantitative research solutions tailored to meet the unique needs and objectives of businesses across various industries. Here’s how SIS International’s quantitative research solutions help businesses:

Reduce Risk : 

SIS International’s quantitative research methodologies help businesses mitigate risk by providing data-driven insights and actionable recommendations for strategic decision-making. By conducting rigorous data analysis and statistical modeling, SIS International helps businesses identify market trends, customer preferences, and competitive dynamics, enabling them to anticipate risks and capitalize on opportunities in today’s dynamic business environment.

Boost Revenue : 

SIS International’s quantitative research services enable businesses to optimize their marketing, product offerings, and pricing strategies to maximize revenue generation. By conducting market segmentation studies, pricing sensitivity analysis, and brand perception studies, our consultants help businesses identify lucrative market segments, price products competitively, and enhance brand positioning to drive revenue growth and market share expansion.

Save Money : 

SIS International’s efficient research methodologies and scalable solutions help businesses save time and resources by streamlining the research process and delivering timely, cost-effective insights. We maximize research efficiency and minimize research costs, allowing businesses to achieve their research objectives within budget and on schedule.

Save Time : 

SIS International’s agile research approach and rapid turnaround times help businesses expedite the research process and accelerate decision-making timelines. Our experts deliver timely insights and actionable recommendations that enable businesses to stay ahead of the competition and capitalize on emerging market opportunities.

Accelerate Growth and Innovation : 

SIS International’s quantitative research solutions provide businesses with the insights and strategic guidance to drive growth and innovation in today’s competitive marketplace. By conducting market sizing studies, product concept testing, and innovation tracking studies, SIS International helps businesses identify unmet needs, evaluate market opportunities, and develop innovative solutions that resonate with target customers and drive sustainable growth and profitability.

Boost ROI : 

SIS International’s quantitative research services deliver measurable ROI by helping businesses optimize their marketing investments, product development, and strategic initiatives. By quantifying the impact of marketing campaigns, product launches, and business initiatives, we enable businesses to allocate resources effectively, optimize performance, and maximize returns on investment.

About SIS International

SIS International offers Quantitative, Qualitative, and Strategy Research. We provide data, tools, strategies, reports and insights for decision-making. We conduct interviews, surveys, focus groups and many other Market Research methods and approaches. Contact us for your next Market Research project.

Expand globally with confidence. Contact SIS International today!


What Is Qualitative vs. Quantitative Study?


Qualitative research focuses on understanding phenomena through detailed, narrative data. It explores the “how” and “why” of human behavior, using methods like interviews, observations, and content analysis. In contrast, quantitative research is numeric and objective, aiming to quantify variables and analyze statistical relationships. It addresses the “when” and “where,” utilizing tools like surveys, experiments, and statistical models to collect and analyze numerical data.

In This Article:

  • What Is Qualitative Research?
  • What Is Quantitative Research?
  • How Do Qualitative and Quantitative Research Differ?
  • What’s the Difference Between a Qualitative and Quantitative Study?
  • Analyzing Qualitative and Quantitative Data
  • When to Use Qualitative or Quantitative Research
  • Develop Your Research Skills at National University

Qualitative and quantitative data are broad categories covering many research approaches and methods. While both share the primary aim of knowledge acquisition, quantitative research is numeric and objective, seeking to answer questions like when or where. On the other hand, qualitative research is concerned with subjective phenomena that can’t be numerically measured, like how different people experience grief.

Having a firm grounding in qualitative and quantitative research methodologies will become especially important once you begin work on your dissertation or thesis toward the end of your academic program. At that point, you’ll need to decide which approach best aligns with your research question, a process that involves working closely with your Dissertation Chair.

Keep reading to learn more about the difference between quantitative vs. qualitative research, including what research techniques they involve, how they approach the task of data analysis, and some strengths — and limitations — of each approach. We’ll also briefly examine mixed-method research, which incorporates elements of both methodologies.

Qualitative research differs from quantitative research in its objectives, techniques, and design. Qualitative research aims to gain insights into phenomena, groups, or experiences that cannot be objectively measured or quantified using mathematics. Instead of seeking to uncover precise answers or statistics in a controlled environment like quantitative research, qualitative research is more exploratory, drawing upon data sources such as photographs, journal entries, video footage, and interviews.

These features stand in stark contrast to quantitative research, as we’ll see throughout the remainder of this article.

Quantitative research tackles questions from different angles compared to qualitative research. Instead of probing for subjective meaning by asking exploratory “how?” and “why?” questions, quantitative research provides precise causal explanations that can be measured and communicated mathematically. While qualitative researchers might visit subjects in their homes or otherwise in the field, quantitative research is usually conducted in a controlled environment. Instead of gaining insight or understanding into a subjective, context-dependent issue, as is the case with qualitative research, the goal is instead to obtain objective information, such as determining the best time to undergo a specific medical procedure.


How Do Qualitative and Quantitative Research Differ?

How are the approaches of quantitative and qualitative research different?

In qualitative studies, data is usually gathered in the field from smaller sample sizes, which means researchers might personally visit participants in their own homes or other environments. Once the research is completed, the researcher must evaluate and make sense of the data in its context, looking for trends or patterns from which new theories, concepts, narratives, or hypotheses can be generated.

Quantitative research is typically carried out via tools (such as questionnaires) instead of by people (such as a researcher asking interview questions). Another significant difference is that, in qualitative studies, researchers must interpret the data to build hypotheses. In a quantitative analysis, the researcher sets out to test a hypothesis.


Both qualitative and quantitative studies are subject to rigorous quality standards. However, the research techniques utilized in each type of study differ, as do the questions and issues they hope to address or resolve. In quantitative studies, researchers tend to follow more rigid structures to test the links or relationships between different variables, ideally based on a random sample. On the other hand, in a qualitative study, not only are the samples typically smaller and narrower (such as using convenience samples), the study’s design is generally more flexible and less structured to accommodate the open-ended nature of the research.

Below are a few examples of qualitative and quantitative research techniques to help illustrate these differences further.

Sources of Quantitative Research

Some common quantitative research methods and data sources include, but are not limited to, the following:

  • Conducting polls, surveys, and experiments
  • Compiling databases of records and information
  • Observing the topic of the research, such as a specific reaction
  • Performing a meta-analysis, which involves statistically pooling and analyzing multiple prior studies in order to identify overall trends or patterns (a brief worked sketch follows this list)
  • Supplying online or paper questionnaires to participants
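
As a minimal illustration of the meta-analysis item above, the sketch below pools invented effect sizes from three hypothetical studies using fixed-effect inverse-variance weighting. The effect sizes and standard errors are made up for demonstration; a real meta-analysis would also assess heterogeneity, study quality, and publication bias.

```python
# Minimal fixed-effect meta-analysis sketch using inverse-variance weighting.
# The effect sizes and standard errors below are invented for illustration.
import math

# (effect_size, standard_error) from three hypothetical prior studies
studies = [(0.30, 0.10), (0.45, 0.15), (0.25, 0.08)]

weights = [1 / se ** 2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```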

The following section will cover some examples of qualitative research methods for comparison, followed by an overview of mixed research methods that blend components of both approaches.

Sources of Qualitative Research

Researchers can use numerous qualitative methods to explore a topic or gain insight into an issue. Some sources of, or approaches to, qualitative research include the following examples:

  • Conducting ethnographic studies, which are studies that seek to explore different phenomena through a cultural or group-specific lens
  • Conducting focus groups
  • Examining various types of records, including but not limited to diary entries, personal letters, official documents, medical or hospital records, photographs, video or audio recordings, and even minutes from meetings
  • Holding one-on-one interviews
  • Obtaining personal accounts and recollections of events or experiences

Examples of Research Questions Best Suited for Qualitative vs. Quantitative Methods

Qualitative Research Questions:

  • How do patients experience the process of recovering from surgery?
  • Why do some employees feel more motivated in remote work environments?
  • What are the cultural influences on dietary habits among teenagers?

Quantitative Research Questions:

  • What is the average recovery time for patients after surgery?
  • How does remote work impact employee productivity levels?
  • What percentage of teenagers adhere to recommended dietary guidelines?

These examples illustrate how qualitative research delves into the depth and context of human experiences, while quantitative research focuses on measurable data and statistical analysis.

Mixed Methods Research

In addition to the purely qualitative and quantitative research methods outlined above, such as conducting focus groups or performing meta-analyses, it’s also possible to take a hybrid approach that merges qualitative and quantitative research aspects. According to an article published by LinkedIn , “Mixed methods research avoids many [of the] criticisms” that have historically been directed at qualitative and quantitative research, such as the former’s vulnerability to bias, by “canceling the effects of one methodology by including the other methodology.” In other words, this mixed approach provides the best of both worlds. “Mixed methods research also triangulates results that offer higher validity and reliability.”

If you’re enrolled as a National University student, you can watch a video introduction to mixed-method research by logging in with your student ID. Our resource library also covers qualitative and quantitative research methodologies and a video breakdown of when to use which approach.

Analyzing Qualitative and Quantitative Data

When it comes to quantitative and qualitative research, methods of collecting data differ, as do the methods of organizing and analyzing it. So what are some best practices for analyzing qualitative and quantitative data sets, and how do they call for different approaches by researchers?

How to Analyze Qualitative Data

Below is a step-by-step overview of how to analyze qualitative data; a short illustrative coding sketch follows the list.

  • Make sure all of your data is finished being compiled before you begin any analysis.
  • Organize and connect your data for consistency using computer-assisted qualitative data analysis software (CAQDAS).
  • Code your data, which can be partially automated using a feedback analytics platform.
  • Start digging deep into analysis, potentially using augmented intelligence to get more accurate results.
  • Report on your findings, ideally using engaging aids to help tell the story.
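
The toy sketch below illustrates the coding step in the most mechanical way possible: tagging interview excerpts with candidate codes via keyword matching. The codebook, keywords, and excerpts are invented for this example; real qualitative coding is interpretive and iterative and is usually done in CAQDAS software rather than by simple string matching.

```python
# Toy illustration of "code your data": tagging excerpts with candidate codes
# by keyword. This only shows the idea of attaching codes to passages of text.
from collections import defaultdict

codebook = {
    "isolation": ["alone", "lonely", "isolated"],
    "flexibility": ["flexible", "schedule", "freedom"],
    "workload": ["overwhelmed", "deadline", "hours"],
}

excerpts = [
    "Working from home gives me freedom over my schedule.",
    "Some weeks I feel completely alone and overwhelmed by deadlines.",
]

coded = defaultdict(list)
for excerpt in excerpts:
    text = excerpt.lower()
    for code, keywords in codebook.items():
        if any(kw in text for kw in keywords):
            coded[code].append(excerpt)

for code, passages in coded.items():
    print(code, "->", passages)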

How to Analyze Quantitative Data

There are numerous approaches to analyzing quantitative data. Some examples include cross-tabulation, conjoint analysis, gap analysis, trend analysis, and SWOT analysis, which refers to Strengths, Weaknesses, Opportunities, and Threats.

Whichever system or systems you use, there are specific steps you should take to ensure that you’ve organized your data and analyzed it as accurately as possible. Here’s a brief four-step overview, followed by a short illustrative sketch.

  • Connect measurement scales to study variables, which helps ensure that your data will be organized in the appropriate order before you proceed.
  • Link data with descriptive statistics, such as mean, median, mode, or frequency.
  • Determine what measurement scale you’ll use for your analysis.
  • Organize the data into tables and conduct an analysis using methods like cross-tabulation or Total Unduplicated Reach and Frequency (TURF) analysis.
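
As a small illustration of steps 2 and 4, the sketch below computes descriptive statistics and a cross-tabulation with pandas on an invented survey dataset. The column names and values are assumptions made purely for demonstration.

```python
# A small sketch of steps 2 and 4: descriptive statistics and a cross-tabulation
# computed with pandas on an invented survey dataset.
import pandas as pd

survey = pd.DataFrame({
    "age_group":    ["18-24", "25-34", "18-24", "35-44", "25-34", "18-24"],
    "satisfaction": ["High", "Low", "High", "High", "Low", "Low"],
    "spend":        [120, 80, 150, 200, 95, 60],
})

# Step 2: descriptive statistics for the numeric variable
print(survey["spend"].agg(["mean", "median", "min", "max"]))

# Step 4: cross-tabulation of two categorical variables
print(pd.crosstab(survey["age_group"], survey["satisfaction"]))
```

If a formal test of association were needed, the resulting cross-tabulation could then be passed to a chi-square test of independence (for example, scipy.stats.chi2_contingency).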


When to Use Qualitative or Quantitative Research

Simply knowing the difference between quantitative and qualitative research isn’t enough; you also need an understanding of when each approach should be used and under what circumstances. For that, you’ll need to consider all of the comparisons we’ve made throughout this article and weigh some potential pros and cons of each methodology.

Pros and Cons of Qualitative Research

Qualitative research has numerous strengths, but it is not the best fit for every project or dissertation. Here are some strengths and weaknesses of qualitative research to help guide your decision:

  • Pro — More flex room for creativity and interpretation of results
  • Pro — Greater freedom to utilize different research techniques as the study evolves
  • Con — Potentially more vulnerable to bias due to their subjective nature
  • Con — Sample sizes tend to be smaller and non-randomized

Pros and Cons of Quantitative Research

Quantitative research also comes with drawbacks and benefits, depending on what information you aim to uncover. Here are a few pros and cons to consider when designing your study.

  • Pro — Large, random samples help ensure that the broader population is more realistically reflected
  • Pro — Specific, precise results can be easily communicated using numbers
  • Con — Data can suffer from a lack of context or personal detail around participant answers
  • Con — Numerous participants are needed, driving up costs while posing logistical challenges

Develop Your Research Skills at National University

If you dream of making a scientific breakthrough and contributing new knowledge that revolutionizes your field, you’ll need a strong foundation in research, from how it’s conducted and analyzed to a clear understanding of professional ethics and standards. By pursuing your degree at National University, you’ll build stronger research skills along with countless other in-demand job skills.

With flexible course schedules, convenient online classes , scholarships and financial aid , and an inclusive military-friendly culture, higher education has never been more achievable or accessible. At National University, you’ll find opportunities to challenge and hone your research skills in more than 75 accredited graduate and undergraduate programs and fast-paced credential and certificate programs in healthcare, business, engineering, computer science, criminal justice, sociology, accounting, and more.

Contact our admissions office to request program information, or apply to National University online today .


BMJ Global Health, vol. 4, Suppl 1 (2019)

Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods

1 School of Social Sciences, Bangor University, Wales, UK

Andrew Booth

2 School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Graham Moore

3 School of Social Sciences, Cardiff University, Wales, UK

Kate Flemming

4 Department of Health Sciences, The University of York, York, UK

Özge Tunçalp

5 Department of Reproductive Health and Research including UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland

Elham Shakibazadeh

6 Department of Health Education and Promotion, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran

Associated Data

bmjgh-2018-000893supp001.pdf

bmjgh-2018-000893supp002.pdf

bmjgh-2018-000893supp003.pdf

bmjgh-2018-000893supp005.pdf

bmjgh-2018-000893supp004.pdf

Guideline developers are increasingly dealing with more difficult decisions concerning whether to recommend complex interventions in complex and highly variable health systems. There is greater recognition that both quantitative and qualitative evidence can be combined in a mixed-method synthesis and that this can be helpful in understanding how complexity impacts on interventions in specific contexts. This paper aims to clarify the different purposes, review designs, questions, synthesis methods and opportunities to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of guidelines developed by WHO, which incorporated quantitative and qualitative evidence, are used to illustrate possible uses of mixed-method reviews and evidence. Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Consideration is given to the opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence. Recommendations are made concerning the future development of methods to better address questions in systematic reviews and guidelines that adopt a complexity perspective.

Summary box

  • When combined in a mixed-method synthesis, quantitative and qualitative evidence can potentially contribute to understanding how complex interventions work and for whom, and how the complex health systems into which they are implemented respond and adapt.
  • The different purposes and designs for combining quantitative and qualitative evidence in a mixed-method synthesis for a guideline process are described.
  • Questions relevant to gaining an understanding of the complexity of complex interventions and the wider health systems within which they are implemented that can be addressed by mixed-method syntheses are presented.
  • The practical methodological guidance in this paper is intended to help guideline producers and review authors commission and conduct mixed-method syntheses where appropriate.
  • If more mixed-method syntheses are conducted, guideline developers will have greater opportunities to access this evidence to inform decision-making.

Introduction

Recognition has grown that while quantitative methods remain vital, they are usually insufficient to address complex health systems related research questions. 1 Quantitative methods rely on an ability to anticipate what must be measured in advance. Introducing change into a complex health system gives rise to emergent reactions, which cannot be fully predicted in advance. Emergent reactions can often only be understood through combining quantitative methods with a more flexible qualitative lens. 2 Adopting a more pluralist position enables a diverse range of research options to the researcher depending on the research question being investigated. 3–5 As a consequence, where a research study sits within the multitude of methods available is driven by the question being asked, rather than any particular methodological or philosophical stance. 6

Publication of guidance on designing complex intervention process evaluations and other works advocating mixed-methods approaches to intervention research have stimulated better quality evidence for synthesis. 1 7–13 Methods for synthesising qualitative 14 and mixed-method evidence have been developed or are in development. Mixed-method research and review definitions are outlined in box 1 .

Defining mixed-method research and reviews

Pluye and Hong 52 define mixed-methods research as “a research approach in which a researcher integrates (a) qualitative and quantitative research questions, (b) qualitative research methods* and quantitative research designs, (c) techniques for collecting and analyzing qualitative and quantitative evidence, and (d) qualitative findings and quantitative results”. A mixed-method synthesis can integrate quantitative, qualitative and mixed-method evidence or data from primary studies.† Mixed-method primary studies are usually disaggregated into quantitative and qualitative evidence and data for the purposes of synthesis. Thomas and Harden further define three ways in which reviews are mixed. 53

  • The types of studies included and hence the type of findings to be synthesised (ie, qualitative/textual and quantitative/numerical).
  • The types of synthesis method used (eg, statistical meta-analysis and qualitative synthesis).
  • The mode of analysis: theory testing AND theory building.

*A qualitative study is one that uses qualitative methods of data collection and analysis to produce a narrative understanding of the phenomena of interest. Qualitative methods of data collection may include, for example, interviews, focus groups, observations and analysis of documents.

†The Cochrane Qualitative and Implementation Methods group coined the term ‘qualitative evidence synthesis’ to mean that the synthesis could also include qualitative data. For example, qualitative data from case studies, grey literature reports and open-ended questions from surveys. ‘Evidence’ and ‘data’ are used interchangeably in this paper.

This paper is one of a series that aims to explore the implications of complexity for systematic reviews and guideline development, commissioned by WHO. This paper is concerned with the methodological implications of including quantitative and qualitative evidence in mixed-method systematic reviews and guideline development for complex interventions. The guidance was developed through a process of bringing together experts in the field, literature searching and consensus building with end users (guideline developers, clinicians and reviewers). We clarify the different purposes, review designs, questions and synthesis methods that may be applicable to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of WHO guidelines that incorporated quantitative and qualitative evidence are used to illustrate possible uses of mixed-method reviews and mechanisms of integration ( table 1 , online supplementary files 1–3 ). Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process are presented. Specific considerations when using an evidence to decision framework such as the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence (DECIDE) framework 15 or the new WHO-INTEGRATE evidence to decision framework 16 at the review design and evidence to decision stage are outlined. See online supplementary file 4 for an example of a health systems DECIDE framework and Rehfuess et al 16 for the new WHO-INTEGRATE framework. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence in guidelines of complex interventions that take a complexity perspective and health systems focus.

Designs and methods and their use or applicability in guidelines and systematic reviews taking a complexity perspective

Table columns: Case study examples and references | Complexity-related questions of interest in the guideline | Types of synthesis used in the guideline | Mixed-method review design and integration mechanisms | Observations, concerns and considerations
A. Mixed-method review designs used in WHO guideline development
Antenatal Care (ANC) guidelines ( )
What do women in high-income, medium-income and low-income countries want and expect from antenatal care (ANC), based on their own accounts of their beliefs, views, expectations and experiences of pregnancy?
Qualitative synthesis
Framework synthesis
Meta-ethnography

Quantitative and qualitative reviews undertaken separately (segregated), an initial scoping review of qualitative evidence established women’s preferences and outcomes for ANC, which informed design of the quantitative intervention review (contingent)
A second qualitative evidence synthesis was undertaken to look at implementation factors (sequential)
Integration: quantitative and qualitative findings were brought together in a series of DECIDE frameworks Tools included:
Psychological theory
SURE framework conceptual framework for implementing policy options
Conceptual framework for analysing integration of targeted health interventions into health systems to analyse contextual health system factors
An innovative approach to guideline development
No formal cross-study synthesis process and limited testing of theory. The hypothetical nature of meta-ethnography findings may be challenging for guideline panel members to process without additional training
See Flemming for considerations when selecting meta-ethnography
What are the evidence-based practices during ANC that improved outcomes and lead to positive pregnancy experience and how should these practices be delivered?
Quantitative review of trials
Factors that influence the uptake of routine antenatal services by pregnant women
Views and experiences of maternity care providers
Qualitative synthesis
Framework synthesis
Meta-ethnography
Task shifting guidelines ( )
What are the effects of lay health worker interventions in primary and community healthcare on maternal and child health and the management of infectious diseases?
Quantitative review of trials
Several published quantitative reviews were used (eg, Cochrane review of lay health worker interventions)
Additional new qualitative evidence syntheses were commissioned (segregated)

Integration: quantitative and qualitative review findings on lay health workers were brought together in several DECIDE frameworks. Tools included adapted SURE Framework and post hoc logic model
An innovative approach to guideline development
The post hoc logic model was developed after the guideline was completed
What factors affect the implementation of lay health worker programmes for maternal and child health?
Qualitative evidence synthesis
Framework synthesis
Risk communication guideline ( )
Quantitative review of quantitative evidence (descriptive)
Qualitative using framework synthesis

A knowledge map of studies was produced to identify the method, topic and geographical spread of evidence. Reviews first organised and synthesised evidence by method-specific streams and reported method-specific findings. Then similar findings across method-specific streams were grouped and further developed using all the relevant evidence
Integration: where possible, quantitative and qualitative evidence for the same intervention and question was mapped against core DECIDE domains. Tools included framework using public health emergency model and disaster phases
Very few trials were identified. Quantitative and qualitative evidence was used to construct a high level view of what appeared to work and what happened when similar broad groups of interventions or strategies were implemented in different contexts
Example of a fully integrated mixed-method synthesis.
Without evidence of effect, it was highly challenging to populate a DECIDE framework
B. Mixed-method review designs that can be used in guideline development
Factors influencing children’s optimal fruit and vegetable consumption
Potential to explore theoretical, intervention and implementation complexity issues
New question(s) of interest are developed and tested in a cross-study synthesis
Mixed-methods synthesis
Each review typically has three syntheses:
Statistical meta-analysis
Qualitative thematic synthesis
Cross-study synthesis

Aim is to generate and test theory from diverse body of literature
Integration: used integrative matrix based on programme theory
Can be used in a guideline process as it fits with the current model of conducting method specific reviews separately then bringing the review products together
C. Mixed-method review designs with the potential for use in guideline development
Interventions to promote smoke alarm ownership and function
Intervention effect and/or intervention implementation related questions within a system
Narrative synthesis (specifically Popay’s methodology)
Four stage approach to integrate quantitative (trials) with qualitative evidence
Integration: initial theory and logic model used to integrate evidence of effect with qualitative case summaries. Tools used included tabulation, groupings and clusters, transforming data: constructing a common rubric, vote-counting as a descriptive tool, moderator variables and subgroup analyses, idea webbing/conceptual mapping, creating qualitative case descriptions, visual representation of relationship between study characteristics and results
Few published examples with the exception of Rodgers, who reinterpreted a Cochrane review on the same topic with narrative synthesis methodology.
Methodology is complex. Most subsequent examples have only partially operationalised the methodology
An intervention effect review will still be required to feed into the guideline process
Factors affecting childhood immunisation
What factors explain complexity and causal pathways?
Bayesian synthesis of qualitative and quantitative evidence
Aim is theory-testing by fusing findings from qualitative and quantitative research
Produces a set of weighted factors associated with/predicting the phenomenon under review
Not yet used in a guideline context.
Complex methodology.
Undergoing development and testing for a health context. The end product may not easily ‘fit’ into an evidence to decision framework and an effect review will still be required
Providing effective and preferred care closer to home: a realist review of intermediate care. Developing and testing theories of change underpinning complex policy interventions
What works for whom in what contexts and how?
Realist synthesis
NB. Other theory-informed synthesis methods follow similar processes

Development of a theory from the literature, analysis of quantitative and qualitative evidence against the theory leads to development of context, mechanism and outcome chains that explain how outcomes come about
Integration: programme theory and assembling mixed-method evidence to create Context, Mechanism and Outcome (CMO) configurations
May be useful where there are few trials. The hypothetical nature of findings may be challenging for guideline panel members to process without additional training. The end product may not easily ‘fit’ into an evidence to decision framework and an effect review will still be required
Use of morphine to treat cancer-related pain
Any aspect of complexity could potentially be explored
How does the context of morphine use affect the established effectiveness of morphine?
Critical interpretive synthesis
Aims to generate theory from large and diverse body of literature
Segregated sequential design
Integration: integrative grid
There are few examples and the methodology is complex.
The hypothetical nature of findings may be challenging for guideline panel members to process without additional training.
The end product would need to be designed to feed into an evidence to decision framework and an intervention effect review will still be required
Food sovereignty, food security and health equity
Examples have examined health system complexity
To understand the state of knowledge on relationships between health equity—ie, health inequalities that are socially produced—and food systems, where the concepts of 'food security' and 'food sovereignty' are prominent
Focused on eight pathways to health (in)equity through the food system: (1) Multi-Scalar Environmental, Social Context; (2) Occupational Exposures; (3) Environmental Change; (4) Traditional Livelihoods, Cultural Continuity; (5) Intake of Contaminants; (6) Nutrition; (7) Social Determinants of Health; (8) Political, Economic and Regulatory context
Meta-narrative
Aim is to review research on diffusion of innovation to inform healthcare policy
Which research (or epistemic) traditions have considered this broad topic area?; How has each tradition conceptualised the topic (for example, including assumptions about the nature of reality, preferred study designs and ways of knowing)?; What theoretical approaches and methods did they use?; What are the main empirical findings?; and What insights can be drawn by combining and comparing findings from different traditions?
Integration: analysis leads to production of a set of meta-narratives (‘storylines of research’)
Not yet used in a guideline context. The originators are calling for meta-narrative reviews to be used in a guideline process.
Potential to provide a contextual overview within which to interpret other types of reviews in a guideline process. The meta-narrative review findings may require tailoring to ‘fit’ into an evidence to decision framework and an intervention effect review will still be required
Few published examples and the methodology is complex

Supplementary data

Taking a complexity perspective

The first paper in this series 17 outlines aspects of complexity associated with complex interventions and health systems that can potentially be explored by different types of evidence, including synthesis of quantitative and qualitative evidence. Petticrew et al 17 distinguish between a complex interventions perspective and a complex systems perspective. A complex interventions perspective defines interventions as having “implicit conceptual boundaries, representing a flexible, but common set of practices, often linked by an explicit or implicit theory about how they work”. A complex systems perspective differs in that “ complexity arises from the relationships and interactions between a system’s agents (eg, people, or groups that interact with each other and their environment), and its context. A system perspective conceives the intervention as being part of the system, and emphasises changes and interconnections within the system itself”. Aspects of complexity associated with implementation of complex interventions in health systems that could potentially be addressed with a synthesis of quantitative and qualitative evidence are summarised in table 2 . Another paper in the series outlines criteria used in a new evidence to decision framework for making decisions about complex interventions implemented in complex systems, against which the need for quantitative and qualitative evidence can be mapped. 16 A further paper 18 that explores how context is dealt with in guidelines and reviews taking a complexity perspective also recommends using both quantitative and qualitative evidence to better understand context as a source of complexity. Mixed-method syntheses of quantitative and qualitative evidence can also help with understanding of whether there has been theory failure and or implementation failure. The Cochrane Qualitative and Implementation Methods Group provide additional guidance on exploring implementation and theory failure that can be adapted to address aspects of complexity of complex interventions when implemented in health systems. 19

Health-system complexity-related questions that a synthesis of quantitative and qualitative evidence could address (derived from Petticrew et al 17 )

Table columns: Aspect of complexity of interest | Examples of potential research question(s) that a synthesis of qualitative and quantitative evidence could address | Types of studies or data that could contribute to a review of qualitative and quantitative evidence
What ‘is’ the system? How can it be described?
What are the main influences on the health problem? How are they created and maintained? How do these influences interconnect? Where might one intervene in the system?
Quantitative: previous systematic reviews of the causes of the problem; epidemiological studies (eg, cohort studies examining risk factors of obesity); network analysis studies showing the nature of social and other systems
Qualitative data: theoretical papers; policy documents
Interactions of interventions with context and adaptation
Qualitative: (1) eg, qualitative studies; case studies
Quantitative: (2) trials or other effectiveness studies from different contexts; multicentre trials, with stratified reporting of findings; other quantitative studies that provide evidence of moderating effects of context
System adaptivity (how does the system change?)
(How) does the system change when the intervention is introduced? Which aspects of the system are affected? Does this potentiate or dampen its effects?
Quantitative: longitudinal data; possibly historical data; effectiveness studies providing evidence of differential effects across different contexts; system modelling (eg, agent-based modelling)
Qualitative: qualitative studies; case studies
Emergent propertiesWhat are the effects (anticipated and unanticipated) which follow from this system change?Quantitative: prospective quantitative evaluations; retrospective studies (eg, case–control studies, surveys) may also help identify less common effects; dose–response evaluations of impacts at aggregate level in individual studies or across studies included with systematic reviews (see suggested examples)
Qualitative: qualitative studies
Positive (reinforcing) and negative (balancing) feedback loopsWhat explains change in the effectiveness of the intervention over time?
Are the effects of an intervention are damped/suppressed by other aspects of the system (eg, contextual influences?)
Quantitative: studies of moderators of effectiveness; long-term longitudinal studies
Qualitative: studies of factors that enable or inhibit implementation of interventions
Multiple (health and non-health) outcomesWhat changes in processes and outcomes follow the introduction of this system change? At what levels in the system are they experienced?Quantitative: studies tracking change in the system over time
Qualitative: studies exploring effects of the change in individuals, families, communities (including equity considerations and factors that affect engagement and participation in change)

It may not be apparent which aspects of complexity or which elements of the complex intervention or health system can be explored in a guideline process, or whether combining qualitative and quantitative evidence in a mixed-method synthesis will be useful, until the available evidence is scoped and mapped. 17 20 A more extensive lead-in phase is typically required to scope the available evidence, engage with stakeholders, and refine the review parameters and questions that can then be mapped against potential review designs and methods of synthesis. 20 At the scoping stage, it is also common to decide on a theoretical perspective 21 or undertake further work to refine a theoretical perspective. 22 This is also the stage to begin articulating the programme theory of the complex intervention, which may be further developed to refine an understanding of complexity and show how the intervention is implemented in and impacts on the wider health system. 17 23 24 In practice, this process can be lengthy, iterative and fluid, with multiple revisions to the review scope, often developing and adapting a logic model 17 as the available evidence becomes known and the potential to incorporate different types of review designs and syntheses of quantitative and qualitative evidence becomes better understood. 25 Further questions, propositions or hypotheses may emerge as the reviews progress, and the protocols therefore generally need to be developed iteratively over time rather than a priori.

Following a scoping exercise and definition of key questions, the next step in the guideline development process is to identify existing systematic reviews, or commission new ones, to locate and summarise the best available evidence in relation to each question. For example, case study 2, ‘Optimising health worker roles for maternal and newborn health through task shifting’, included quantitative reviews that did and did not take an additional complexity perspective, and qualitative evidence syntheses that were able to explain how specific elements of complexity impacted on intervention outcomes within the wider health system. Further understanding of health system complexity was facilitated through the conduct of additional country-level case studies that contributed to an overall understanding of what worked and what happened when lay health worker interventions were implemented. See table 1 and online supplementary file 2.

There are a few existing examples, which we draw on in this paper, but integrating quantitative and qualitative evidence in a mixed-method synthesis is relatively uncommon in a guideline process. Box 2 includes a set of key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in mixed-methods design might ask. Subsequent sections provide more information and signposting to further reading to help address these key questions.

Key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in a mixed-methods design might ask

  • WHAT: What form do the key question(s) take? Compound questions requiring both quantitative and qualitative evidence? Questions requiring mixed-methods studies? Separate quantitative and qualitative questions?

  • WHICH: Which types of studies and data could answer them? Separate quantitative and qualitative research studies? Related quantitative and qualitative research studies? Mixed-methods studies? Quantitative unpublished data and/or qualitative unpublished data, eg, narrative survey data?

  • WHEN: At what point(s) will the quantitative and qualitative evidence be integrated? Throughout the review? Following separate reviews? At the question point? At the synthesis point? At the evidence to recommendations stage? Or a combination?

  • HOW: How will the evidence be synthesised and presented? Narrative synthesis or summary? Quantitising approach, eg, frequency analysis? Qualitising approach, eg, thematic synthesis? Tabulation? Logic model? Conceptual model/framework? Graphical approach?

  • WHICH: Which mixed-method designs, methodologies and methods best fit into a guideline process to inform recommendations?

Complexity-related questions that a synthesis of quantitative and qualitative evidence can potentially address

Petticrew et al 17 define the different aspects of complexity and examples of complexity-related questions that can potentially be explored in guidelines and systematic reviews taking a complexity perspective. Relevant aspects of complexity outlined by Petticrew et al 17 are summarised in table 2, together with the corresponding questions that could be addressed in a synthesis combining qualitative and quantitative evidence. Importantly, however, the aspects of complexity and their associated concepts of interest have yet to be fully translated into primary health research or systematic reviews. There are few known examples where selected complexity concepts have been used to analyse or reanalyse a primary intervention study. Most notable is Chandler et al, 26 who specifically set out to identify and translate a set of relevant complexity theory concepts for application in health systems research. Chandler et al then reanalysed a trial process evaluation using selected complexity theory concepts to better understand the complex causal pathway in the health system that explains some aspects of complexity in table 2.

Rehfuess et al 16 also recommend upfront consideration of the WHO-INTEGRATE evidence to decision criteria when planning a guideline and formulating questions. The criteria reflect WHO norms and values and take account of a complexity perspective. The framework can be used by guideline development groups as a menu to decide which criteria to prioritise, and which study types and synthesis methods can be used to collect evidence for each criterion. Many of the criteria and their related questions can be addressed using a synthesis of quantitative and qualitative evidence: the balance of benefits and harms, human rights and sociocultural acceptability, health equity, societal implications and feasibility (see table 3). Similar aspects in the DECIDE framework 15 could also be addressed using a synthesis of qualitative and quantitative evidence.

WHO-INTEGRATE evidence to decision framework criteria, example questions and types of studies that could potentially address these questions (derived from Rehfuess et al 16 )

For each domain of the WHO-INTEGRATE EtD framework, examples are given of potential research question(s) that a synthesis of qualitative and/or quantitative evidence could address, and of the types of studies that could contribute to such a review:

  • Balance of benefits and harms
    Question: To what extent do patients/beneficiaries value different health outcomes?
    Qualitative: studies of views and experiences
    Quantitative: questionnaire surveys

  • Human rights and sociocultural acceptability
    Questions: Is the intervention acceptable to patients/beneficiaries as well as to those implementing it? To what extent do patients/beneficiaries value different non-health outcomes? How does the intervention affect an individual’s, population group’s or organisation’s autonomy, that is, their ability to make a competent, informed and voluntary decision?
    Qualitative: discourse analysis; qualitative studies (ideally longitudinal to examine changes over time)
    Quantitative: pro et contra analysis; discrete choice experiments; longitudinal quantitative studies (to examine changes over time); cross-sectional studies
    Mixed-method: mixed-method studies; case studies

  • Health equity, equality and non-discrimination
    Questions: How affordable is the intervention for individuals, households or communities? How accessible—in terms of physical as well as informational access—is the intervention across different population groups?
    Qualitative: studies of views and experiences
    Quantitative: cross-sectional or longitudinal observational studies; discrete choice experiments; health expenditure studies; health system barrier studies; ethical analysis; GIS-based studies

  • Societal implications
    Questions: What is the social impact of the intervention: are there features of the intervention that increase or reduce stigma and that lead to social consequences? Does the intervention enhance or limit social goals, such as education, social cohesion and the attainment of various human rights beyond health? Does it change social norms at individual or population level? What is the environmental impact of the intervention? Does it contribute to or limit the achievement of goals to protect the environment and efforts to mitigate or adapt to climate change?
    Qualitative: studies of views and experiences
    Quantitative: RCTs; quasi-experimental studies; comparative observational studies; longitudinal implementation studies; case studies; power analyses; environmental impact assessments; modelling studies

  • Feasibility and health system considerations
    Questions: Are there any legal or regulatory aspects that impact on implementation of the intervention? How might programmatic factors, such as past decisions and strategic considerations, positively or negatively impact the implementation of the intervention? How does the intervention interact with the existing health system? Is it likely to fit well or not, and is it likely to impact on it in positive or negative ways? How does the intervention interact with the need for and usage of the existing health workforce, at national and subnational levels? How does the intervention interact with the need for and usage of the existing health infrastructure as well as other relevant infrastructure, at national and subnational levels?
    Non-research: policy and regulatory frameworks
    Qualitative: studies of views and experiences
    Mixed-method: health systems research; situation analysis; case studies
    Quantitative: cross-sectional studies

GIS, Geographical Information System; RCT, randomised controlled trial.

Questions as anchors or compasses

Questions can serve as an ‘anchor’ by articulating the specific aspects of complexity to be explored (eg, Is successful implementation of the intervention context dependent?). 27 Anchor questions such as “How does intervention x impact on socioeconomic inequalities in health behaviour/outcome x?” are the kind of health system question that requires a synthesis of both quantitative and qualitative evidence and hence a mixed-method synthesis. Quantitative evidence can quantify the difference in effect, but does not answer the question of how. The ‘how’ question can be partly answered with quantitative and qualitative evidence. For example, quantitative evidence may reveal where socioeconomic inequality emerges in the health system (an emergent property) by exploring questions such as “Does patterning emerge during uptake because fewer people from certain groups come into contact with an intervention in the first place?” or “Are people from certain backgrounds more likely to drop out, or to maintain effects beyond an intervention differently?” Qualitative evidence may help understand the reasons behind all of these mechanisms. Alternatively, questions can act as ‘compasses’, where a question sets out a starting point from which to explore further and to potentially ask further questions or develop propositions or hypotheses to explore through a complexity perspective (eg, What factors enhance or hinder implementation?). 27 Other papers in this series provide further guidance on developing questions for qualitative evidence syntheses and on question formulation. 14 28

For anchor and compass questions, additional application of a theory (eg, complexity theory) can help focus evidence synthesis and presentation to explore and explain complexity issues. 17 21 Development of a review specific logic model(s) can help to further refine an initial understanding of any complexity-related issues of interest associated with a specific intervention, and if appropriate the health system or section of the health system within which to contextualise the review question and analyse data. 17 23–25 Specific tools are available to help clarify context and complex interventions. 17 18

If a complexity perspective, and certain criteria within evidence to decision frameworks, are deemed relevant and desirable by guideline developers, that perspective can only be pursued if the evidence is available. Careful scoping using knowledge maps or scoping reviews will help inform development of questions that are answerable with available evidence. 20 If evidence of effect is not available, then a different approach will be required to develop questions leading to a more general narrative understanding of what happened when complex interventions were implemented in a health system (such as in case study 3—risk communication guideline). This does not mean that the original questions, for which no evidence was found when scoping the literature, were unimportant. An important function of creating a knowledge map is also to identify gaps to inform a future research agenda.

Table 2 and online supplementary files 1–3 outline examples of questions in the three case studies, which were all ‘COMPASS’ questions for the qualitative evidence syntheses.

Types of integration and synthesis designs in mixed-method reviews

The shift towards integration of qualitative and quantitative evidence in primary research has, in recent years, begun to be mirrored within research synthesis. 29–31 The natural extension to undertaking quantitative or qualitative reviews has been the development of methods for integrating qualitative and quantitative evidence within reviews, and within the guideline process using evidence to decision-frameworks. Advocating the integration of quantitative and qualitative evidence assumes a complementarity between research methodologies, and a need for both types of evidence to inform policy and practice. Below, we briefly outline the current designs for integrating qualitative and quantitative evidence within a mixed-method review or synthesis.

One of the early approaches to integrating qualitative and quantitative evidence detailed by Sandelowski et al 32 advocated three basic review designs: segregated, integrated and contingent designs, which have been further developed by Heyvaert et al 33 ( box 3 ).

Segregated, integrated and contingent designs 32 33

Segregated design.

A conventional design that keeps quantitative and qualitative approaches separate, based on the assumption that they are different entities, can be distinguished from each other, and produce findings that warrant separate analyses and syntheses. Ultimately, the separate synthesis results can themselves be synthesised.

Integrated design

The methodological differences between qualitative and quantitative studies are minimised as both are viewed as producing findings that can be readily synthesised into one another because they address the same research purposes and questions. Transformation involves either turning qualitative data into quantitative data (quantitising) or turning quantitative findings into qualitative form (qualitising) to facilitate their integration.

Contingent design

Takes a cyclical approach to synthesis, with the findings from one synthesis informing the focus of the next synthesis, until all the research objectives have been addressed. Studies are not necessarily grouped and categorised as qualitative or quantitative.
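
To make the transformation step in an integrated design concrete, the short sketch below shows one common form of quantitising: counting how many included studies report each coded qualitative theme so that the counts can sit alongside quantitative results. The study labels and theme codes are hypothetical and are not drawn from the case studies discussed in this paper.

```python
# Minimal illustration of "quantitising" coded qualitative findings.
# Study identifiers and theme codes below are hypothetical.
from collections import Counter

# Each included qualitative study is tagged with the implementation
# barrier themes it reports.
study_themes = {
    "study_01": ["staff_shortages", "community_trust"],
    "study_02": ["community_trust", "training_gaps"],
    "study_03": ["staff_shortages", "training_gaps", "community_trust"],
    "study_04": ["supply_chain"],
}

# Frequency analysis: how many studies report each theme.
theme_counts = Counter(
    theme for themes in study_themes.values() for theme in themes
)

total_studies = len(study_themes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: reported in {count}/{total_studies} studies")
```

The resulting frequencies could then be tabulated next to effect estimates from a quantitative synthesis as one possible route to integration.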

A recent review of more than 400 systematic reviews 34 combining quantitative and qualitative evidence identified two main synthesis designs—convergent and sequential. In a convergent design, qualitative and quantitative evidence is collated and analysed in a parallel or complementary manner, whereas in a sequential synthesis, the collation and analysis of quantitative and qualitative evidence takes place in a sequence with one synthesis informing the other ( box 4 ). 6 These designs can be seen to build on the work of Sandelowski et al , 32 35 particularly in relation to the transformation of data from qualitative to quantitative (and vice versa) and the sequential synthesis design, with a cyclical approach to reviewing that evokes Sandelowski’s contingent design.

Convergent and sequential synthesis designs 34

Convergent synthesis design.

Qualitative and quantitative research is collected and analysed at the same time in a parallel or complementary manner. Integration can occur at three points:

a. Data-based convergent synthesis design

All included studies are analysed using the same methods and the results are presented together. As only one synthesis method is used, data transformation occurs (qualitising or quantitising). Usually addresses one review question.

b. Results-based convergent synthesis design

Qualitative and quantitative data are analysed and presented separately but integrated using a further synthesis method, eg, narratively, using tables or matrices, or by reanalysing evidence. The results of both syntheses are combined in a third synthesis. Usually addresses an overall review question with subquestions.

c. Parallel-results convergent synthesis design

Qualitative and quantitative data are analysed and presented separately, with integration occurring in the interpretation of results in the discussion section. Usually addresses two or more complementary review questions.

Sequential synthesis design

A two-phase approach in which data collection and analysis of one type of evidence (eg, qualitative) occurs after, and is informed by, the collection and analysis of the other type (eg, quantitative). Usually addresses an overall question with subquestions, with both syntheses complementing each other.

The three case studies ( table 1 , online supplementary files 1–3 ) illustrate the diverse combination of review designs and synthesis methods that were considered the most appropriate for specific guidelines.

Methods for conducting mixed-method reviews in the context of guidelines for complex interventions

In this section, we draw on examples where specific review designs and methods have been or can be used to explore selected aspects of complexity in guidelines or systematic reviews. We also identify other review methods that could potentially be used to explore aspects of complexity. Of particular note, we could not find any specific examples of systematic methods to synthesise highly diverse research designs as advocated by Petticrew et al 17 and summarised in tables 2 and 3 . For example, we could not find examples of methods to synthesise qualitative studies, case studies, quantitative longitudinal data, possibly historical data, effectiveness studies providing evidence of differential effects across different contexts, and system modelling studies (eg, agent-based modelling) to explore system adaptivity.

There are different ways that quantitative and qualitative evidence can be integrated into a review and then into a guideline development process. In practice, some methods enable integration of different types of evidence in a single synthesis, while in other methods, the single systematic review may include a series of stand-alone reviews or syntheses that are then combined in a cross-study synthesis. Table 1 provides an overview of the characteristics of different review designs and methods and guidance on their applicability for a guideline process. Designs and methods that have already been used in WHO guideline development are described in part A of the table. Part B outlines a design and method that can be used in a guideline process, and part C covers those that have the potential to integrate quantitative, qualitative and mixed-method evidence in a single review design (such as meta-narrative reviews and Bayesian syntheses), but their application in a guideline context has yet to be demonstrated.

Points of integration when integrating quantitative and qualitative evidence in guideline development

Depending on the review design (see boxes 3 and 4 ), integration can potentially take place at a review team and design level, and more commonly at several key points of the review or guideline process. The following sections outline potential points of integration and associated practical considerations when integrating quantitative and qualitative evidence in guideline development.

Review team level

In a guideline process, it is common for syntheses of quantitative and qualitative evidence to be done separately by different teams and then to integrate the evidence. A practical consideration relates to the organisation, composition and expertise of the review teams and ways of working. If the quantitative and qualitative reviews are being conducted separately and then brought together by the same team members, who are equally comfortable operating within both paradigms, then a consistent approach across both paradigms becomes possible. If, however, a team is being split between the quantitative and qualitative reviews, then the strengths of specialisation can be harnessed, for example, in quality assessment or synthesis. Optimally, at least one, if not more, of the team members should be involved in both quantitative and qualitative reviews to offer the possibility of making connections throughout the review and not simply at pre-agreed junctures. This mirrors O’Cathain’s conclusion that mixed-methods primary research tends to work only when there is a principal investigator who values and is able to oversee integration. 9 10 While the above decisions have been articulated in the context of two types of evidence, variously quantitative and qualitative, they equally apply when considering how to handle studies reporting a mixed-method study design, where data are usually disaggregated into quantitative and qualitative for the purposes of synthesis (see case study 3—risk communication in humanitarian disasters).

Question formulation

Clearly specified key question(s), derived from a scoping or consultation exercise, will make it clear whether quantitative and qualitative evidence is required in a guideline development process and which aspects will be addressed by which types of evidence. For the remaining stages of the process, as documented below, a review team faces a choice between handling each type of evidence separately, whether sequentially or in parallel, with a view to joining the two products on completion, or attempting integration throughout the review process. In each case, the underlying trade-off is between efficiencies and potential comparability versus sensitivity to the underlying paradigm.

Once key questions are clearly defined, the guideline development group typically needs to consider whether to conduct a single sensitive search to address all potential subtopics (lumping) or whether to conduct specific searches for each subtopic (splitting). 36 A related consideration is whether to search separately for qualitative, quantitative and mixed-method evidence ‘streams’ or whether to conduct a single search and then identify specific study types at the subsequent sifting stage. These two considerations often mean a trade-off between a single search process involving very large numbers of records and a more protracted search process retrieving smaller numbers of records. Both approaches have advantages, and the choice may depend on the respective availability of resources for searching and sifting.

Screening and selecting studies

Closely related to decisions around searching are considerations relating to screening and selecting studies for inclusion in a systematic review. An important consideration here is whether the review team will screen records for all review types, regardless of their subsequent involvement (‘altruistic sifting’), or specialise in screening for the study type with which they are most familiar. The risk of missing relevant reports might be minimised by whole team screening for empirical reports in the first instance and then coding them for a specific quantitative, qualitative or mixed-methods report at a subsequent stage.

Assessment of methodological limitations in primary studies

Within a guideline process, review teams may be more limited in their choice of instruments to assess methodological limitations of primary studies, as there are mandatory requirements to use the Cochrane risk of bias tool 37 to feed into Grading of Recommendations Assessment, Development and Evaluation (GRADE), 38 or to select from a small pool of qualitative appraisal instruments in order to apply GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative Research) 39 to assess the overall certainty or confidence in findings. The Cochrane Qualitative and Implementation Methods Group has recently issued guidance on the selection of appraisal instruments and core assessment criteria. 40 The Mixed-Methods Appraisal Tool, which is currently undergoing further development, offers a single quality assessment instrument for quantitative, qualitative and mixed-methods studies. 41 Other options include using corresponding instruments from within the same ‘stable’, for example, using different Critical Appraisal Skills Programme instruments. 42 While using instruments developed by the same team or organisation may achieve a degree of epistemological consonance, benefits may come more from consistency of approach and reporting rather than from a shared view of quality. Alternatively, a more paradigm-sensitive approach would involve selecting the best instrument for each respective review while deferring challenges from later heterogeneity of reporting.

Data extraction

The way in which data and evidence are extracted from primary research studies for review will be influenced by the type of integrated synthesis being undertaken and the review purpose. Initially, decisions need to be made regarding the nature and type of data and evidence that are to be extracted from the included studies. Method-specific reporting guidelines 43 44 provide a good template as to what quantitative and qualitative data it is potentially possible to extract from different types of method-specific study reports, although in practice reporting quality varies. Online supplementary file 5 provides a hypothetical example of the different types of studies from which quantitative and qualitative evidence could potentially be extracted for synthesis.

The decisions around what data or evidence to extract will be guided by how ‘integrated’ the mixed-method review will be. For those reviews where the quantitative and qualitative findings of studies are synthesised separately and integrated at the point of findings (eg, segregated or contingent approaches or sequential synthesis design), separate data extraction approaches will likely be used.

Where integration occurs during the process of the review (eg, integrated approach or convergent synthesis design), an integrated approach to data extraction may be considered, depending on the purpose of the review. This may involve the use of a data extraction framework, the choice of which needs to be congruent with the approach to synthesis chosen for the review. 40 45 The integrative or theoretical framework may be decided on a priori if a pre-developed theoretical or conceptual framework is available in the literature. 27 The development of a framework may alternatively arise from the reading of the included studies, in relation to the purpose of the review, early in the process. The Cochrane Qualitative and Implementation Methods Group provide further guidance on extraction of qualitative data, including use of software. 40

Synthesis and integration

Relatively few synthesis methods start off being integrated from the beginning, and these methods have generally been subject to less testing and evaluation particularly in a guideline context (see table 1 ). A review design that started off being integrated from the beginning may be suitable for some guideline contexts (such as in case study 3—risk communication in humanitarian disasters—where there was little evidence of effect), but in general if there are sufficient trials then a separate systematic review and meta-analysis will be required for a guideline. Other papers in this series offer guidance on methods for synthesising quantitative 46 and qualitative evidence 14 in reviews that take a complexity perspective. Further guidance on integrating quantitative and qualitative evidence in a systematic review is provided by the Cochrane Qualitative and Implementation Methods Group. 19 27 29 40 47

Types of findings produced by specific methods

It is highly likely (unless there are well-designed process evaluations) that the primary studies will not themselves seek to address the complexity-related questions required for a guideline process. In that case, review authors will need to configure the available evidence and transform the evidence through the synthesis process to produce explanations, propositions and hypotheses (ie, findings) that were not obvious at primary study level. It is important that guideline commissioners, developers and review authors are aware that specific methods are intended to produce a type of finding with a specific purpose (such as developing new theory in the case of meta-ethnography). 48 Case study 1 (antenatal care guideline) provides an example of how a meta-ethnography was used to develop a new theory as an end product, 48 49 as well as a framework synthesis which produced descriptive and explanatory findings that were more easily incorporated into the guideline process. 27 The definitions (box 5) may be helpful when defining the different types of findings.

Different levels of findings

Descriptive findings —qualitative evidence-driven translated descriptive themes that do not move beyond the primary studies.

Explanatory findings —may either be at a descriptive or theoretical level. At the descriptive level, qualitative evidence is used to explain phenomena observed in quantitative results, such as why implementation failed in specific circumstances. At the theoretical level, the transformed and interpreted findings that go beyond the primary studies can be used to explain the descriptive findings. The latter description is generally the accepted definition in the wider qualitative community.

Hypothetical or theoretical findings —qualitative evidence-driven transformed themes (or lines of argument) that go beyond the primary studies. Although similar, Thomas and Harden 56 distinguish between the purposes of two types of theoretical findings: analytical themes and third-order interpretations, the product of meta-ethnographies. 48

Analytical themes are a product of interrogating descriptive themes by placing the synthesis within an external theoretical framework (such as the review question and subquestions) and are considered more appropriate when a specific review question is being addressed (eg, in a guideline or to inform policy). 56

Third-order interpretations come from translating studies into one another while preserving the original context and are more appropriate when a body of literature is being explored in and of itself with broader or emergent review questions. 48

Bringing mixed-method evidence together in evidence to decision (EtD) frameworks

A critical element of guideline development is the formulation of recommendations by the Guideline Development Group, and EtD frameworks help to facilitate this process. 16 The EtD framework can also be used as a mechanism to integrate and display quantitative and qualitative evidence and findings mapped against the EtD framework domains, with hyperlinks to more detailed evidence summaries from contributing reviews (see table 1). It is commonly the EtD framework that enables the findings of the separate quantitative and qualitative reviews to be brought together in a guideline process. Specific challenges when populating the DECIDE evidence to decision framework 15 were noted in case study 3 (risk communication in humanitarian disasters), as there was an absence of intervention effect data and the interventions to communicate public health risks were context specific and varied. These problems would not, however, have been addressed by substitution of the DECIDE framework with the new INTEGRATE 16 evidence to decision framework. A different type of EtD framework needs to be developed for reviews that do not include sufficient evidence of intervention effect.

Mixed-method review and synthesis methods are generally the least developed of all systematic review methods. It is acknowledged that methods for combining quantitative and qualitative evidence are generally poorly articulated. 29 50 There are however some fairly well-established methods for using qualitative evidence to explore aspects of complexity (such as contextual, implementation and outcome complexity), which can be combined with evidence of effect (see sections A and B of table 1 ). 14 There are good examples of systematic reviews that use these methods to combine quantitative and qualitative evidence, and examples of guideline recommendations that were informed by evidence from both quantitative and qualitative reviews (eg, case studies 1–3). With the exception of case study 3 (risk communication), the quantitative and qualitative reviews for these specific guidelines have been conducted separately, and the findings subsequently brought together in an EtD framework to inform recommendations.

Other mixed-method review designs have potential to contribute to understanding of complex interventions and to explore aspects of wider health systems complexity but have not been sufficiently developed and tested for this specific purpose, or used in a guideline process (section C of table 1 ). Some methods such as meta-narrative reviews also explore different questions to those usually asked in a guideline process. Methods for processing (eg, quality appraisal) and synthesising the highly diverse evidence suggested in tables 2 and 3 that are required to explore specific aspects of health systems complexity (such as system adaptivity) and to populate some sections of the INTEGRATE EtD framework remain underdeveloped or in need of development.

In addition to the required methodological development mentioned above, there is no GRADE approach 38 for assessing confidence in findings developed from combined quantitative and qualitative evidence. Another paper in this series outlines how to deal with complexity and grading different types of quantitative evidence, 51 and the GRADE-CERQual approach for qualitative findings is described elsewhere, 39 but both these approaches are applied to method-specific and not mixed-method findings. An unofficial adaptation of GRADE was used in the risk communication guideline that reported mixed-method findings. There is also no reporting guideline for mixed-method reviews, 47 and for now reports will need to conform to the relevant reporting requirements of the respective method-specific guideline. There is a need to further adapt and test DECIDE, 15 WHO-INTEGRATE 16 and other types of evidence to decision frameworks to accommodate evidence from mixed-method syntheses which do not set out to determine the statistical effects of interventions and in circumstances where there are no trials.

When conducting quantitative and qualitative reviews that will subsequently be combined, there are specific considerations for managing and integrating the different types of evidence throughout the review process. We have summarised different options for combining qualitative and quantitative evidence in mixed-method syntheses that guideline developers and systematic reviewers can choose from, as well as outlining the opportunities to integrate evidence at different stages of the review and guideline development process.

Review commissioners, authors and guideline developers generally have less experience of combining qualitative and quantitative evidence in mixed-methods reviews. In particular, there is a relatively small group of reviewers who are skilled at undertaking fully integrated mixed-method reviews. Commissioning additional qualitative and mixed-method reviews creates an additional cost. Large complex mixed-method reviews generally take more time to complete. Careful consideration needs to be given as to which guidelines would benefit most from additional qualitative and mixed-method syntheses. More training is required to develop capacity, and there is a need to develop processes for preparing the guideline panel to consider and use mixed-method evidence in their decision-making.

This paper has presented how qualitative and quantitative evidence, combined in mixed-method reviews, can help understand aspects of complex interventions and the systems within which they are implemented. There are further opportunities to use these methods, and to further develop the methods, to look more widely at additional aspects of complexity. There is a range of review designs and synthesis methods to choose from depending on the question being asked or the questions that may emerge during the conduct of the synthesis. Additional methods need to be developed (or existing methods further adapted) in order to synthesise the full range of diverse evidence that is desirable to explore the complexity-related questions when complex interventions are implemented into health systems. We encourage review commissioners and authors, and guideline developers to consider using mixed-methods reviews and synthesis in guidelines and to report on their usefulness in the guideline development process.

Handling editor: Soumyadeep Bhaumik

Contributors: JN, AB, GM, KF, ÖT and ES drafted the manuscript. All authors contributed to paper development and writing and agreed the final manuscript. Anayda Portela and Susan Norris from WHO managed the series. Helen Smith was series Editor. We thank all those who provided feedback on various iterations.

Funding: Funding provided by the World Health Organization Department of Maternal, Newborn, Child and Adolescent Health through grants received from the United States Agency for International Development and the Norwegian Agency for Development Cooperation.

Disclaimer: ÖT is a staff member of WHO. The author alone is responsible for the views expressed in this publication and they do not necessarily represent the decisions or policies of WHO.

Competing interests: No financial interests declared. JN, AB and ÖT have an intellectual interest in GRADE CERQual; and JN has an intellectual interest in the iCAT_SR tool.

Patient consent: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: No additional data are available.




Variables in Research – Definition, Types and Examples


Definition:

In Research, Variables refer to characteristics or attributes that can be measured, manipulated, or controlled. They are the factors that researchers observe or manipulate to understand the relationship between them and the outcomes of interest.

Types of Variables in Research

Types of Variables in Research are as follows:

Independent Variable

This is the variable that is manipulated by the researcher. It is also known as the predictor variable, as it is used to predict changes in the dependent variable. Examples of independent variables include age, gender, dosage, and treatment type.

Dependent Variable

This is the variable that is measured or observed to determine the effects of the independent variable. It is also known as the outcome variable, as it is the variable that is affected by the independent variable. Examples of dependent variables include blood pressure, test scores, and reaction time.

Confounding Variable

This is a variable that can affect the relationship between the independent variable and the dependent variable. It is a variable that is not being studied but could impact the results of the study. For example, in a study on the effects of a new drug on a disease, a confounding variable could be the patient’s age, as older patients may have more severe symptoms.

Mediating Variable

This is a variable that explains the relationship between the independent variable and the dependent variable. It is a variable that comes in between the independent and dependent variables and is affected by the independent variable, which then affects the dependent variable. For example, in a study on the relationship between exercise and weight loss, the mediating variable could be metabolism, as exercise can increase metabolism, which can then lead to weight loss.

Moderator Variable

This is a variable that affects the strength or direction of the relationship between the independent variable and the dependent variable. It is a variable that influences the effect of the independent variable on the dependent variable. For example, in a study on the effects of caffeine on cognitive performance, the moderator variable could be age, as older adults may be more sensitive to the effects of caffeine than younger adults.
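
As a rough illustration of the caffeine-and-age example above, the sketch below (hypothetical data) estimates the caffeine-performance slope separately for younger and older participants; clearly different slopes would suggest that age moderates the effect. A formal analysis would instead fit a single model containing an interaction term.

```python
# Hypothetical data: cognitive performance at different caffeine doses
# for a younger group and an older group.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "caffeine_mg": [0, 50, 100, 150, 200, 0, 50, 100, 150, 200],
    "performance": [60, 66, 71, 75, 78, 62, 64, 65, 63, 60],
    "age_group":   ["younger"] * 5 + ["older"] * 5,
})

# Estimate the caffeine-performance slope within each age group.
for group, subset in df.groupby("age_group"):
    fit = stats.linregress(subset["caffeine_mg"], subset["performance"])
    print(f"{group}: slope = {fit.slope:.3f} performance points per mg")
```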

Control Variable

This is a variable that is held constant or controlled by the researcher to ensure that it does not affect the relationship between the independent variable and the dependent variable. Control variables are important to ensure that any observed effects are due to the independent variable and not to other factors. For example, in a study on the effects of a new teaching method on student performance, the control variables could include class size, teacher experience, and student demographics.

Continuous Variable

This is a variable that can take on any value within a certain range. Continuous variables can be measured on a scale and are often used in statistical analyses. Examples of continuous variables include height, weight, and temperature.

Categorical Variable

This is a variable that can take on a limited number of values or categories. Categorical variables can be nominal or ordinal. Nominal variables have no inherent order, while ordinal variables have a natural order. Examples of categorical variables include gender, race, and educational level.

Discrete Variable

This is a variable that can only take on specific values. Discrete variables are often used in counting or frequency analyses. Examples of discrete variables include the number of siblings a person has, the number of times a person exercises in a week, and the number of students in a classroom.

Dummy Variable

This is a variable that takes on only two values, typically 0 and 1, and is used to represent categorical variables in statistical analyses. Dummy variables are often used when a categorical variable cannot be used directly in an analysis. For example, in a study on the effects of gender on income, a dummy variable could be created, with 0 representing female and 1 representing male.
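
A minimal pandas sketch of the gender-and-income example is shown below; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical data with a categorical variable (gender) and an outcome (income).
df = pd.DataFrame({
    "gender": ["female", "male", "female", "male", "female"],
    "income": [42000, 45000, 39000, 47000, 41000],
})

# Encode gender as a single 0/1 dummy variable (1 = male, 0 = female).
df["gender_male"] = (df["gender"] == "male").astype(int)

# pandas can also generate dummy columns automatically.
dummies = pd.get_dummies(df["gender"], prefix="gender", drop_first=True)

print(df)
print(dummies)
```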

Extraneous Variable

This is a variable that has no relationship with the independent or dependent variable but can affect the outcome of the study. Extraneous variables can lead to erroneous conclusions and can be controlled through random assignment or statistical techniques.

Latent Variable

This is a variable that cannot be directly observed or measured, but is inferred from other variables. Latent variables are often used in psychological or social research to represent constructs such as personality traits, attitudes, or beliefs.

Moderator-mediator Variable

This is a variable that acts as both a moderator and a mediator: it can moderate the relationship between the independent and dependent variables and also mediate that same relationship. Moderator-mediator variables are often used in complex statistical analyses.

Variables Analysis Methods

There are different methods to analyze variables in research, including the following (a brief illustrative code sketch follows the list):

  • Descriptive statistics: This involves analyzing and summarizing data using measures such as mean, median, mode, range, standard deviation, and frequency distribution. Descriptive statistics are useful for understanding the basic characteristics of a data set.
  • Inferential statistics : This involves making inferences about a population based on sample data. Inferential statistics use techniques such as hypothesis testing, confidence intervals, and regression analysis to draw conclusions from data.
  • Correlation analysis: This involves examining the relationship between two or more variables. Correlation analysis can determine the strength and direction of the relationship between variables, and can be used to make predictions about future outcomes.
  • Regression analysis: This involves examining the relationship between an independent variable and a dependent variable. Regression analysis can be used to predict the value of the dependent variable based on the value of the independent variable, and can also determine the significance of the relationship between the two variables.
  • Factor analysis: This involves identifying patterns and relationships among a large number of variables. Factor analysis can be used to reduce the complexity of a data set and identify underlying factors or dimensions.
  • Cluster analysis: This involves grouping data into clusters based on similarities between variables. Cluster analysis can be used to identify patterns or segments within a data set, and can be useful for market segmentation or customer profiling.
  • Multivariate analysis : This involves analyzing multiple variables simultaneously. Multivariate analysis can be used to understand complex relationships between variables, and can be useful in fields such as social science, finance, and marketing.
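
The sketch below (hypothetical data on study hours and exam scores) illustrates three of the methods just listed: descriptive statistics, correlation analysis, and simple regression.

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset: hours studied and exam score for ten students.
df = pd.DataFrame({
    "hours_studied": [2, 3, 5, 1, 4, 6, 2, 7, 3, 5],
    "exam_score":    [55, 60, 72, 48, 66, 80, 52, 85, 58, 70],
})

# Descriptive statistics: mean, standard deviation, quartiles, min/max.
print(df.describe())

# Correlation analysis: strength and direction of the linear relationship.
r, p_value = stats.pearsonr(df["hours_studied"], df["exam_score"])
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# Regression analysis: predict the dependent variable (exam_score)
# from the independent variable (hours_studied).
fit = stats.linregress(df["hours_studied"], df["exam_score"])
print(f"predicted exam_score = {fit.intercept:.1f} + {fit.slope:.1f} * hours_studied")
```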

Examples of Variables

  • Age : This is a continuous variable that represents the age of an individual in years.
  • Gender : This is a categorical variable that represents the biological sex of an individual and can take on values such as male and female.
  • Education level: This is a categorical variable that represents the level of education completed by an individual and can take on values such as high school, college, and graduate school.
  • Income : This is a continuous variable that represents the amount of money earned by an individual in a year.
  • Weight : This is a continuous variable that represents the weight of an individual in kilograms or pounds.
  • Ethnicity : This is a categorical variable that represents the ethnic background of an individual and can take on values such as Hispanic, African American, and Asian.
  • Time spent on social media : This is a continuous variable that represents the amount of time an individual spends on social media in minutes or hours per day.
  • Marital status: This is a categorical variable that represents the marital status of an individual and can take on values such as married, divorced, and single.
  • Blood pressure : This is a continuous variable that represents the force of blood against the walls of arteries in millimeters of mercury.
  • Job satisfaction : This is a continuous variable that represents an individual’s level of satisfaction with their job and can be measured using a Likert scale.

Applications of Variables

Variables are used in many different applications across various fields. Here are some examples:

  • Scientific research: Variables are used in scientific research to understand the relationships between different factors and to make predictions about future outcomes. For example, scientists may study the effects of different variables on plant growth or the impact of environmental factors on animal behavior.
  • Business and marketing: Variables are used in business and marketing to understand customer behavior and to make decisions about product development and marketing strategies. For example, businesses may study variables such as consumer preferences, spending habits, and market trends to identify opportunities for growth.
  • Healthcare : Variables are used in healthcare to monitor patient health and to make treatment decisions. For example, doctors may use variables such as blood pressure, heart rate, and cholesterol levels to diagnose and treat cardiovascular disease.
  • Education : Variables are used in education to measure student performance and to evaluate the effectiveness of teaching strategies. For example, teachers may use variables such as test scores, attendance, and class participation to assess student learning.
  • Social sciences : Variables are used in social sciences to study human behavior and to understand the factors that influence social interactions. For example, sociologists may study variables such as income, education level, and family structure to examine patterns of social inequality.

Purpose of Variables

Variables serve several purposes in research, including:

  • To provide a way of measuring and quantifying concepts: Variables help researchers measure and quantify abstract concepts such as attitudes, behaviors, and perceptions. By assigning numerical values to these concepts, researchers can analyze and compare data to draw meaningful conclusions.
  • To help explain relationships between different factors: Variables help researchers identify and explain relationships between different factors. By analyzing how changes in one variable affect another variable, researchers can gain insight into the complex interplay between different factors.
  • To make predictions about future outcomes : Variables help researchers make predictions about future outcomes based on past observations. By analyzing patterns and relationships between different variables, researchers can make informed predictions about how different factors may affect future outcomes.
  • To test hypotheses: Variables help researchers test hypotheses and theories. By collecting and analyzing data on different variables, researchers can test whether their predictions are accurate and whether their hypotheses are supported by the evidence.
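
For instance, the hypothesis-testing purpose above could look like the minimal sketch below, which uses hypothetical test scores and an independent-samples t-test to compare a treatment group with a control group.

```python
from scipy import stats

# Hypothetical test scores for a control group and a group taught with a
# new method (the independent variable); scores are the dependent variable.
control_scores = [68, 72, 65, 70, 74, 69, 66, 71]
treatment_scores = [75, 78, 72, 80, 74, 77, 79, 73]

# Independent-samples t-test comparing the two group means.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would support the hypothesis that the groups differ.
```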

Characteristics of Variables

Characteristics of Variables are as follows:

  • Measurement : Variables can be measured using different scales, such as nominal, ordinal, interval, or ratio scales. The scale used to measure a variable can affect the type of statistical analysis that can be applied.
  • Range : Variables have a range of values that they can take on. The range can be finite, such as the number of students in a class, or infinite, such as the range of possible values for a continuous variable like temperature.
  • Variability : Variables can have different levels of variability, which refers to the degree to which the values of the variable differ from each other. Highly variable variables have a wide range of values, while low variability variables have values that are more similar to each other.
  • Validity and reliability : Variables should be both valid and reliable to ensure accurate and consistent measurement. Validity refers to the extent to which a variable measures what it is intended to measure, while reliability refers to the consistency of the measurement over time.
  • Directionality: Some variables have directionality, meaning that the relationship between the variables is not symmetrical. For example, in a study of the relationship between smoking and lung cancer, smoking is the independent variable and lung cancer is the dependent variable.

Advantages of Variables

Here are some of the advantages of using variables in research:

  • Control : Variables allow researchers to control the effects of external factors that could influence the outcome of the study. By manipulating and controlling variables, researchers can isolate the effects of specific factors and measure their impact on the outcome.
  • Replicability : Variables make it possible for other researchers to replicate the study and test its findings. By defining and measuring variables consistently, other researchers can conduct similar studies to validate the original findings.
  • Accuracy : Variables make it possible to measure phenomena accurately and objectively. By defining and measuring variables precisely, researchers can reduce bias and increase the accuracy of their findings.
  • Generalizability : Variables allow researchers to generalize their findings to larger populations. By selecting variables that are representative of the population, researchers can draw conclusions that are applicable to a broader range of individuals.
  • Clarity : Variables help researchers to communicate their findings more clearly and effectively. By defining and categorizing variables, researchers can organize and present their findings in a way that is easily understandable to others.

Disadvantages of Variables

Here are some of the main disadvantages of using variables in research:

  • Simplification : Variables may oversimplify the complexity of real-world phenomena. By breaking down a phenomenon into variables, researchers may lose important information and context, which can affect the accuracy and generalizability of their findings.
  • Measurement error : Variables rely on accurate and precise measurement, and measurement error can affect the reliability and validity of research findings. The use of subjective or poorly defined variables can also introduce measurement error into the study.
  • Confounding variables : Confounding variables are factors that are not measured but that affect the relationship between the variables of interest. If confounding variables are not accounted for, they can distort or obscure the relationship between the variables of interest.
  • Limited scope: Variables are defined by the researcher, and the scope of the study is therefore limited by the researcher’s choice of variables. This can lead to a narrow focus that overlooks important aspects of the phenomenon being studied.
  • Ethical concerns: The selection and measurement of variables may raise ethical concerns, especially in studies involving human subjects. For example, using variables that are related to sensitive topics, such as race or sexuality, may raise concerns about privacy and discrimination.



What is Descriptive Research? Definition, Methods, Types and Examples

Descriptive research is a methodological approach that seeks to depict the characteristics of a phenomenon or subject under investigation. In scientific inquiry, it serves as a foundational tool for researchers aiming to observe, record, and analyze the intricate details of a particular topic. This method provides a rich and detailed account that aids in understanding, categorizing, and interpreting the subject matter.

Descriptive research design is widely employed across diverse fields, and its primary objective is to systematically observe and document all variables and conditions influencing the phenomenon.

With this definition of descriptive research in mind, let’s look at an example. Consider a researcher working on climate change adaptation who wants to understand water management trends in an arid village in a specific study area. She must conduct a demographic survey of the region, gather population data, and then conduct descriptive research on this demographic segment. The study will then uncover details on “what are the water management practices and trends in village X.” Note, however, that it will not provide any investigative information about “why” the patterns exist.


What is descriptive research?

If you’ve been wondering “What is descriptive research,” we’ve got you covered in this post! In a nutshell, descriptive research is an exploratory research method that helps a researcher describe a population, circumstance, or phenomenon. It can help answer what , where , when and how questions, but not why questions. In other words, it does not involve changing the study variables and does not seek to establish cause-and-effect relationships.


Importance of descriptive research

Now, let’s delve into the importance of descriptive research. This research method acts as the cornerstone for various academic and applied disciplines. Its primary significance lies in its ability to provide a comprehensive overview of a phenomenon, enabling researchers to gain a nuanced understanding of the variables at play. This method aids in forming hypotheses, generating insights, and laying the groundwork for further in-depth investigations. The following points further illustrate its importance:

  • Provides insights into a population or phenomenon: Descriptive research furnishes a comprehensive overview of the characteristics and behaviors of a specific population or phenomenon, thereby guiding and shaping the research project.
  • Offers baseline data: The data acquired through this type of research acts as a reference for subsequent investigations, laying the groundwork for further studies.
  • Allows validation of sampling methods: Descriptive research validates sampling methods, aiding in the selection of the most effective approach for the study.
  • Helps reduce time and costs: It is cost-effective and time-efficient, making this an economical means of gathering information about a specific population or phenomenon.
  • Ensures replicability: Descriptive research is easily replicable, ensuring a reliable way to collect and compare information from various sources.

When to use descriptive research design?

Determining when to use descriptive research depends on the nature of the research question. Before diving into the reasons behind an occurrence, understanding the how, when, and where aspects is essential. Descriptive research design is a suitable option when the research objective is to discern characteristics, frequencies, trends, and categories without manipulating variables. It is therefore often employed in the initial stages of a study before progressing to more complex research designs. To put it another way, descriptive research precedes the hypotheses of explanatory research. It is particularly valuable when there is limited existing knowledge about the subject.

Some examples are as follows, highlighting that these questions would arise before a clear outline of the research plan is established:

  • In the last two decades, what changes have occurred in patterns of urban gardening in Mumbai?
  • What are the differences in climate change perceptions of farmers in coastal versus inland villages in the Philippines?

Characteristics of descriptive research

Descriptive research is characterized by its focus on observing and documenting the features of a subject. Its specific characteristics are listed below.

  • Quantitative nature: Some descriptive research types involve quantitative research methods to gather quantifiable information for statistical analysis of the population sample.
  • Qualitative nature: Some descriptive research examples include those using the qualitative research method to describe or explain the research problem.
  • Observational nature: This approach is non-invasive and observational because the study variables remain untouched. Researchers merely observe and report, without introducing interventions that could impact the subject(s).
  • Cross-sectional nature: In descriptive research, different sections belonging to the same group are studied, providing a “snapshot” of sorts.
  • Springboard for further research: The data collected are further studied and analyzed using different research techniques. This approach helps guide the suitable research methods to be employed.

Types of descriptive research

There are various descriptive research types, each suited to different research objectives. Take a look at the different types below.

  • Surveys: This involves collecting data through questionnaires or interviews to gather qualitative and quantitative data.
  • Observational studies: This involves observing and collecting data on a particular population or phenomenon without influencing the study variables or manipulating the conditions. These may be further divided into cohort studies, case studies, and cross-sectional studies:
  • Cohort studies: Also known as longitudinal studies, these studies involve the collection of data over an extended period, allowing researchers to track changes and trends.
  • Case studies: These deal with a single individual, group, or event, which might be rare or unusual.
  • Cross-sectional studies : A researcher collects data at a single point in time, in order to obtain a snapshot of a specific moment.
  • Focus groups: In this approach, a small group of people are brought together to discuss a topic. The researcher moderates and records the group discussion. This can also be considered a “participatory” observational method.
  • Descriptive classification: Relevant to the biological sciences, this type of approach may be used to classify living organisms.

Descriptive research methods

Several descriptive research methods can be employed, and these are more or less similar to the types of approaches mentioned above.

  • Surveys: This method involves the collection of data through questionnaires or interviews. Surveys may be done online or offline, and the target subjects might be hyper-local, regional, or global.
  • Observational studies: These entail the direct observation of subjects in their natural environment. These include case studies, dealing with a single case or individual, as well as cross-sectional and longitudinal studies, for a glimpse into a population or changes in trends over time, respectively. Participatory observational studies such as focus group discussions may also fall under this method.

Researchers must carefully consider descriptive research methods, types, and examples to harness their full potential in contributing to scientific knowledge.

Examples of descriptive research

Now, let’s consider some descriptive research examples.

  • In social sciences, an example could be a study analyzing the demographics of a specific community to understand its socio-economic characteristics.
  • In business, a market research survey aiming to describe consumer preferences would be a descriptive study.
  • In ecology, a researcher might undertake a survey of all the types of monocots naturally occurring in a region and classify them up to species level.

These examples showcase the versatility of descriptive research across diverse fields.

Advantages of descriptive research

There are several advantages to this approach, which every researcher must be aware of. These are as follows:

  • Owing to the numerous descriptive research methods and types, primary data can be obtained in diverse ways and be used for developing a research hypothesis .
  • It is a versatile research method and allows flexibility.
  • Detailed and comprehensive information can be obtained because the data collected can be qualitative or quantitative.
  • It is carried out in the natural environment, which greatly minimizes certain types of bias and ethical concerns.
  • It is an inexpensive and efficient approach, even with large sample sizes.

Disadvantages of descriptive research

On the other hand, this design has some drawbacks as well:

  • It is limited in its scope as it does not determine cause-and-effect relationships.
  • The approach does not generate new information and simply depends on existing data.
  • Study variables are not manipulated or controlled, and this limits the conclusions to be drawn.
  • Descriptive research findings may not be generalizable to other populations.
  • Finally, it offers a preliminary understanding rather than an in-depth understanding.

To reiterate, the advantages of descriptive research lie in its ability to provide a comprehensive overview, aid hypothesis generation, and serve as a preliminary step in the research process. However, its limitations include a potential lack of depth, inability to establish cause-and-effect relationships, and susceptibility to bias.

Frequently asked questions

When should researchers conduct descriptive research?

Descriptive research is most appropriate when researchers aim to portray and understand the characteristics of a phenomenon without manipulating variables. It is particularly valuable in the early stages of a study.

What is the difference between descriptive and exploratory research?

Descriptive research focuses on providing a detailed depiction of a phenomenon, while exploratory research aims to explore and generate insights into an issue where little is known.

What is the difference between descriptive and experimental research?

Descriptive research observes and documents without manipulating variables, whereas experimental research involves intentional interventions to establish cause-and-effect relationships.

Is descriptive research only for social sciences?

No, various descriptive research types may be applicable to all fields of study, including social science, humanities, physical science, and biological science.

How important is descriptive research?

The importance of descriptive research lies in its ability to provide a glimpse of the current state of a phenomenon, offering valuable insights and establishing a basic understanding. Further, the advantages of descriptive research include its capacity to offer a straightforward depiction of a situation or phenomenon, facilitate the identification of patterns or trends, and serve as a useful starting point for more in-depth investigations. Additionally, descriptive research can contribute to the development of hypotheses and guide the formulation of research questions for subsequent studies.



Assessment of students’ perceptions of education using the Dundee Ready Environment Educational Measure (DREEM) inventory at Princess Nora bint Abdulrahman University, Saudi Arabia

  • Latefa Hamad Al Fryan 1 ,
  • Mahasin Ibrahim Shomo 2 &
  • Ibrahim A. Bani   ORCID: orcid.org/0000-0003-0566-1740 3 , 4  

BMC Medical Education, volume 24, Article number 928 (2024)


Educational settings in professional health education establishments significantly shape students' academic, social, and emotional experiences. These environments encompass physical, psychological, and social infrastructures of programs or institutions, which jointly influence learning and development. This study analyzed the educational environment at Princess Nora University in Saudi Arabia, a renowned institution in health education.

The primary aim of this study was to evaluate the perceptions of the educational environment among students at Princess Nora University using the Dundee Ready Education Environment Measure (DREEM) inventory. The DREEM inventory is a renowned and validated tool designed to gauge students' perceptions across various dimensions of their educational experience.

Employing a cross-sectional survey design, the research gathered data from a sample of 321 students enrolled in the College of Health and Rehabilitation Sciences at Princess Nora University. The DREEM inventory, which measures the academic, social, and emotional aspects of the learning environment from the student's perspective, was utilized to collect the information.

The findings from the study indicated a positive perception of the educational environment among the students, with an overall mean score of 113.84 out of 200 on the DREEM inventory. Analysis of the subscales revealed that the Student Perceptions of Atmosphere (SPoA) received the highest scores, indicating a favourable environment, while Student Social Self-Perceptions (SSSP) scored the lowest, suggesting areas that may require attention and improvement.

The study successfully showed the utility of the DREEM inventory in assessing the educational environment at Princess Nora University, highlighting its effectiveness as a tool for understanding student perceptions. The positive overall score suggests a conducive learning atmosphere, though the disparity in subscale scores points to potential areas for enhancement. Recommendation: The research suggests that Saudi Arabian universities should implement the DREEM inventory to assess and enhance their educational settings, ultimately delivering a comprehensive and nurturing learning experience for students.


Introduction

Professional health education classrooms are a hotbed of academic curiosity worldwide [ 1 ]. A program's or institution's physical, social, psychological, and other infrastructures make up what is collectively referred to as the educational environment [ 2 ]. It also encompasses the mindset and actions of the teachers, the strategies they employ to convey the course content to students, the style of the curriculum, and the instructional methods used. In short, the educational environment comprises everything that impacts instruction and study [ 3 ].

Many criteria recognize the traits and characteristics of a learning environment; program rules, governance structures, and other features may also be considered elements of the educational environment [ 4 ]. It is widely agreed that the school climate significantly affects student achievement [ 5 ]. Thus, it has become clear that reviewing school settings is vital to ensuring that all students receive a well-rounded education [ 6 ]. The National Commission for Academic Accreditation and Assessment (NCAAA) in Saudi Arabia prioritizes the quality of the learning environment when conducting program evaluations, thereby ensuring the quality and standardization of higher education institutions and programs [ 7 ]. The NCAAA was established to set standards, accredit institutions, and improve the quality of higher education programs [ 8 ]. It strives to guarantee the effectiveness and comprehensiveness of educational and training programs, to assess their impact on the national economy and development, and to establish trust in local and global communities regarding the outcomes of these institutions and programs [ 8 ]. While the NCAAA has only recently incorporated postgraduate programs into its plans, it now allows universities to apply for accreditation of these programs [ 9 ]. Its efforts to accredit undergraduate programs have shown noteworthy progress, but its role in accrediting postgraduate programs is still developing and may not yet provide the same level of assurance as it does for undergraduate programs.

Recent changes to health professional education (HPE) curricula, such as incorporating new teaching and learning practices and evaluation methods and the growing diversity of today's student body, have increased the urgency of reviewing the current state of health-related education [ 10 ]. There is a strong correlation between how students feel about their classrooms and their academic performance. Student data should inform decisions regarding the curriculum, instructional methods, and school infrastructure [ 11 ]. Educators believe that students' exposure to both classroom and clinical settings significantly impacts their development of attitudes, knowledge, abilities, and behaviors as they move through the medical school curriculum [ 12 ].

The quality of educational settings has been evaluated using several tools developed and deployed in recent years [ 13 , 14 , 15 , 16 , 17 , 18 ]. In 1997, a team of medical educators from Dundee University created a reliable instrument for assessing the setting and culture of HPE. Its official title is the Dundee Ready Environment Educational Measure (DREEM). It is widely accepted as a culturally neutral, credible, and accurate assessment tool for gauging the climate of undergraduate HPE programs. The quantitative and qualitative methodologies used to design this instrument are the basis for its widespread acceptance [ 1 ]. The current sample size employed in the assessment of students' perceptions of education using the DREEM inventory at the College of Education, Princess Nora University, Saudi Arabia, was deemed adequate and representative, ensuring the generalizability of the findings to the broader student population. This study's robust sampling methodology and consideration of diverse academic levels contribute to its representativeness and reliability.

Despite the potential value of applying DREEM to the analysis of HPE problems, this tool is not commonly used in health-related programs in Saudi Arabia (SA). Thus, this study investigated how students in the health professions at Princess Noura University (PNU), an all-female institution in SA, felt about their classroom environment.

This cross-sectional study was conducted over two months, beginning on September 28, 2022, at the Faculty of Health and Rehabilitation Sciences, PNU, SA. When it opened in 2008, the college was known as the "College of Physiotherapy" and offered only one major, physiotherapy. A reorganization in 2012 changed its name from the "College of Physiotherapy" to the "College of Health and Rehabilitation Sciences." The college comprises four departments: the Department of Health Sciences, the Department of Communication Sciences, the Department of Rehabilitation Sciences, and the Department of Radiology. Thirteen different programs are available across these four departments, and all are open only to women.

Sample size

Following Cochran [ 19 ], the optimal sample size was calculated using the following formula (1):

n = Z²π(1 − π) / d²  (1)

where n is the number of observations, π is the fraction of expected levels (in this case, the expected DREEM response), Z is the standard normal deviate for a 95% confidence interval, and d is the intended margin of error. Because the authors could not know how many students would respond in each category of the DREEM, a conservative proportion of 50% was used: π = 0.5, d = 0.05, and z = 1.96. Allowing for a 10% non-response rate and applying a finite-population correction factor, the minimum number of participants was set at 400. The computed sample size was then split across schools based on the relative student population.
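
For readers who want to reproduce the calculation, the short sketch below applies Cochran's formula as described above. It is a minimal illustration in Python, not the authors' own code; the population size passed to the finite-population correction is a hypothetical placeholder, since the exact enrolment figure is not reported here.

```python
import math

def cochran_sample_size(p: float = 0.5, d: float = 0.05, z: float = 1.96) -> float:
    """Cochran's formula for an infinite population: n0 = z^2 * p * (1 - p) / d^2."""
    return (z ** 2) * p * (1 - p) / (d ** 2)

def adjusted_sample_size(population_size: int, p: float = 0.5, d: float = 0.05,
                         z: float = 1.96, non_response: float = 0.10) -> int:
    """Apply the finite-population correction, then inflate for expected non-response."""
    n0 = cochran_sample_size(p, d, z)
    n_fpc = n0 / (1 + (n0 - 1) / population_size)  # finite-population correction
    return math.ceil(n_fpc / (1 - non_response))   # allow for the stated 10% non-response

print(round(cochran_sample_size(), 2))  # 384.16 before any correction
print(adjusted_sample_size(2000))       # target for a hypothetical population of 2,000 students
```

With π = 0.5, d = 0.05, and z = 1.96, the uncorrected formula gives roughly 384; the final target of 400 then depends on the population figure and non-response assumptions the authors applied.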

Data collection

In September 2021, the College of Education at Princess Nora University in Saudi Arabia was considered one of the world's largest educational institutions for women. The university is located in Riyadh, the Saudi capital, houses thousands of female students, and provides ample opportunities for practical learning experiences in real-world classroom settings within a rich academic and cultural environment. Through research and innovation, the institution aims to prepare its students to make positive contributions to Saudi Arabia's educational system. The participants' responses were gathered using the standard, widely applied DREEM questionnaire, which consists of 50 items divided into the following five scales:

Twelve items (1, 7, 13, 16, 20, 22, 24, 25, 38, 44, 47, and 48), with a maximum score of 48, make up Students' Perceptions of Learning (SPoL). Scores are interpreted as follows: 0–12 indicates very poor learning, 13–24 indicates that teaching is viewed negatively, 25–36 indicates a more positive perception, and 37–48 indicates that teaching is highly thought of.

The Students' Perceptions of Teachers (SPoT) scale contains 11 items (2, 6, 8, 9, 18, 29, 32, 37, 39, 40, and 50), with a maximum score of 44. Scores of 0–11 indicate very poor teaching, 12–22 indicate teachers in need of some retraining, 23–33 indicate teachers moving in the right direction, and 34–44 indicate excellent, model teachers.

Students' Academic Self-Perceptions (SASP) consists of 8 items (2, 10, 22, 26, 27, 31, 41, and 45), with a maximum score of 32. Scores of 0–8 reflect a sense of absolute failure, 9–16 reflect many negative aspects, 17–24 reflect a more positive outlook on learning, and 25–32 reflect confidence.

Twelve items (11, 12, 17, 23, 30, 33, 34, 35, 36, 42, 43, and 50) make up the Students' Perceptions of Atmosphere (SPoA) scale, which is scored from 0 to 48. Scores of 0–12 indicate a poor environment, 13–24 indicate many problems that need to be fixed, 25–36 indicate a more positive atmosphere, and 37–48 indicate a good feeling overall.

The Students' Social Self-Perceptions (SSSP) scale contains 7 items (3, 4, 14, 15, 19, 28, and 46), with a maximum score of 28. Scores of 0–7 indicate an awful social life, 8–14 a poor one, 15–21 one that is not too bad, and 22–28 a very good social life.

Students rated each item on a five-point Likert scale from "strongly agree" to "strongly disagree": a score of 4 indicated strong agreement, 3 agreement, 2 uncertainty, 1 disagreement, and 0 strong disagreement. The nine negatively worded items (4, 8, 9, 17, 25, 35, 39, 48, and 50) were reverse-scored. Altogether, the scale has a maximum total score of 200, which on the original DREEM represents a perfect educational environment; a higher score reflects a more favorable rating. Total scores are interpreted as follows: 0–50 suggests a very poor educational environment, 51–100 indicates many problems, 101–150 indicates an environment that is more positive than negative, and 151–200 indicates an excellent educational environment [ 1 ]. At the item level, individual items with a mean score of 3.5 or higher were regarded as real strengths, items with a mean score of 2.0 or lower as requiring attention, and items with mean scores between 2 and 3 as areas of the educational environment with room for improvement [ 20 ]. The survey was administered electronically through a web-based Google survey; students received a link to the questionnaire via AirDrop.
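
To make the scoring scheme concrete, the sketch below implements it in Python. It is a minimal illustration only: the item-to-subscale assignments, the reverse-scored items, and the interpretation bands are taken from the description above, while the response data and function names are hypothetical.

```python
# Minimal sketch of DREEM scoring as described above; item groupings follow the text.
SUBSCALE_ITEMS = {
    "SPoL": [1, 7, 13, 16, 20, 22, 24, 25, 38, 44, 47, 48],
    "SPoT": [2, 6, 8, 9, 18, 29, 32, 37, 39, 40, 50],
    "SASP": [2, 10, 22, 26, 27, 31, 41, 45],
    "SPoA": [11, 12, 17, 23, 30, 33, 34, 35, 36, 42, 43, 50],
    "SSSP": [3, 4, 14, 15, 19, 28, 46],
}
NEGATIVE_ITEMS = {4, 8, 9, 17, 25, 35, 39, 48, 50}  # reverse-scored items

def adjusted(item: int, raw: int) -> int:
    """Reverse-score negatively worded items (raw responses run 0-4, 4 = strongly agree)."""
    return 4 - raw if item in NEGATIVE_ITEMS else raw

def dreem_scores(responses: dict) -> dict:
    """Return each subscale score plus the overall 0-200 total for one respondent."""
    scores = {name: sum(adjusted(i, responses[i]) for i in items)
              for name, items in SUBSCALE_ITEMS.items()}
    scores["total"] = sum(adjusted(i, r) for i, r in responses.items())
    return scores

def interpret_total(total: int) -> str:
    """Overall interpretation bands as given in the text."""
    if total <= 50:
        return "very poor educational environment"
    if total <= 100:
        return "many problems"
    if total <= 150:
        return "more positive than negative"
    return "excellent educational environment"

# Hypothetical respondent who answers "agree" (3) to all 50 items.
responses = {item: 3 for item in range(1, 51)}
scores = dreem_scores(responses)
print(scores, interpret_total(scores["total"]))
```

For this hypothetical respondent the total comes to 132, which falls in the "more positive than negative" band.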

Data analysis

The Statistical Package for the Social Sciences (SPSS, version 25; SPSS Inc., Chicago, IL, USA) was used for data entry, verification, and analysis. Descriptive statistics and inferential methods were used to analyze the data. Group means were compared using frequency analyses and basic tabulation, and subgroup means were also compared. Scores were compared on a college-specific basis using an independent Student's t-test. To determine whether there was a statistically significant difference between cohorts of students defined by academic year and academic program, the Kruskal–Wallis test was performed. A result with a probability level of less than 0.05 was considered statistically significant.
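
The same comparisons do not require SPSS; as a rough sketch, an equivalent independent-samples t-test and Kruskal–Wallis test can be run in Python with SciPy. The data frame below is entirely hypothetical and only illustrates the shape of the analysis described above.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per respondent, with a total DREEM score,
# a department label, and an academic-year cohort.
df = pd.DataFrame({
    "dreem_total": [118, 95, 130, 104, 141, 99, 122, 110, 87, 135, 126, 101],
    "department":  ["Health", "Health", "Radiology", "Radiology", "Health", "Radiology",
                    "Health", "Radiology", "Health", "Radiology", "Health", "Radiology"],
    "year":        [2, 3, 4, 2, 3, 4, 2, 3, 4, 2, 3, 4],
})

# Independent-samples t-test comparing two departments (analogous to the college-level comparison).
health = df.loc[df["department"] == "Health", "dreem_total"]
radiology = df.loc[df["department"] == "Radiology", "dreem_total"]
t_stat, t_p = stats.ttest_ind(health, radiology)

# Kruskal-Wallis test across academic-year cohorts (the non-parametric test used in the paper).
groups = [g["dreem_total"].values for _, g in df.groupby("year")]
h_stat, kw_p = stats.kruskal(*groups)

print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {kw_p:.3f}")
# A p-value below 0.05 would be treated as statistically significant, as in the study.
```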

In this study, 321 students completed the online survey, corresponding to 80.3% of the estimated sample size. The average age of the respondents was 22.2 ± 2.001 years. The majority were enrolled in the Health Sciences Division ( n  = 133/321, 41.4%) and the Radiology Sciences Division ( n  = 108/321, 33.6%). Table 1 shows the distribution of students by year of study. Participation rates were relatively consistent across all years except the first, in which only 3.4% ( n  = 11/321) of the students participated.

The overall DREEM score for this investigation was 113.84 ± 35.187, which suggests that the educational environment is more favorable than unfavorable. The subscale scores for SPoL, SPoT, SASP, SPoA, and SSSP are listed in Table 2, and the individual items in each category are presented in Table 3.

A Kruskal–Wallis test was run with background characteristics as independent variables and the DREEM domains as dependent variables to determine whether there was an association between the two. Table 4 indicates no statistically significant differences between the five DREEM domains and the demographic information of the students who took the survey.

The quality of an educational institution is crucial for achieving its HPE program goals [ 21 ] and, according to Gruppen et al. [ 23 ], is intimately tied to the success of any HPE programme. Therefore, this research aimed to assess the educational climate as perceived by female students majoring in health sciences at PNU, SA [ 24 ], and to identify any discrepancies in opinion between departments and between students of varying academic years. The researchers employed the DREEM inventory, which is widely considered the best tool for measuring the educational environment of undergraduate HPE institutions [ 22 ].

Overall scores

The mean DREEM score in this analysis was 113.84 ± 35.187, suggesting that participants were more likely to have a favorable impression of their school's environment than a negative one [ 10 ]. The results of several other Saudi academic institutions corroborate our findings. Global DREEM test results from King Khalid University [ 25 ], Qassim University [ 26 ], King Fahad Medical City [ 27 ], Tabuk University [ 28 ], Jazan University [ 29 ], and King Abdul Aziz University [ 30 ] were (102, 112.9, 112, 111.5, 105, 104.9, 102) respectively.

Comparing these scores, it can be observed that some institutions, including those in the present study (113.84), had DREEM scores higher than 100, indicating a generally positive perception of the educational environment. King Khalid University, King Fahad Medical City, Tabuk University, and Jazan University also had scores above 100, reflecting favorable perceptions among their students. On the other hand, Qassim University and King Abdul Aziz University had scores below 100, suggesting some areas for improvement in their educational environments as perceived by their students.

It is important to note that the DREEM inventory is a valuable tool for assessing various facets of the educational environment, and its application in multiple institutions helps to identify patterns and trends in students' perceptions. These scores can guide administrators and policymakers to understand the strengths and weaknesses of their educational settings and enable them to make informed decisions to enhance their overall learning experience. By benchmarking their scores against other institutions, universities can gain valuable insights and potentially implement best practices from those with higher DREEM scores to improve their educational landscape.

This was broadly consistent with the results of studies conducted at a university in the U.K. (139) [ 31 ] and in Sudan (130) [ 32 ], Nepal (130) [ 20 ], Malaysia (125.3) [ 33 ], Nigeria (118) [ 20 ], Turkey (117.6) [ 34 ], India (117) [ 34 ], and Sri Lanka (108) [ 35 ]. The subscale analysis of the DREEM inventory is a valuable application, as it reveals the strengths and weaknesses of the current educational environment. With scores of 27.2 on the SPoL scale, 23.1 on the SPoT scale, 18.3 on the SASP scale, 27.3 on the SPoA scale, and 15.7 on the SSSP scale, it is clear that there were more positives than negatives in the educational environment. These results are consistent with those found in research performed at other SA universities such as Jazan University [ 29 ], Qassim University [ 26 ], and King Khalid University [ 25 ].

Among these studies, the U.K. study reported the highest score (139), indicating a very positive perception of the educational environment, while Sudan and Nepal both scored 130, also reflecting favorable impressions of their respective educational settings. Students in Malaysia, Nigeria, Turkey, India, and Sri Lanka likewise reported positive perceptions, with scores ranging from 108 to 125.3. The current study's overall score of 113.84 falls within this range, reflecting a similarly positive, if somewhat lower, perception of the educational environment. The DREEM inventory's subscale analysis, discussed below, helps to unpack these overall scores.

The DREEM score is broken down into five specific subscales: SPoL (Students' Perceptions of Learning), SPoT (Students' Perceptions of Teachers), SASP (Students' Academic Self-Perceptions), SPoA (Students' Perceptions of Atmosphere), and SSSP (Students' Social Self-Perceptions). Based on the subscale scores in this study, most participants positively perceived their educational environment. The SPoL, SPoT, SASP, SPoA, and SSSP scores (27.2, 23.1, 18.3, 27.3, and 15.7, respectively) indicate that the educational environment was more positive than negative. Clearly, students were optimistic about their learning experiences, teaching quality, academic self-confidence, and the atmosphere within the institution. These findings are also consistent with those from other Saudi Arabian universities, including Jazan University, Qassim University, and King Khalid University, suggesting that students generally perceive the educational environment of these universities positively.

Focusing on the current study's comparison provides valuable insights into the DREEM scores from various universities worldwide. Students in the current study viewed their educational environment positively, highlighting the value of DREEM's subscale analysis in understanding specific aspects of the educational environment. More detailed information is required for comprehensive conclusions and understanding of the full implications of these findings, including the exact DREEM score from the current study.

Students' Perceptions of Learning (SPoL) & Students' Perceptions of Teachers (SPoT)

Only 53% ( n  = 170), with a mean of 27.2 ± 9.444, showed a positive perception of learning, and 50% ( n  = 160), with a mean of 23.01 ± 7.904, described teachers as moving in the right direction, as shown in Tables 2 and 3 . This is mainly due to the continuous professional development program implemented by the college and university, which aimed to enhance the faculty's capacities in teaching and learning, including the preparation and delivery of teaching materials, development of a blueprint, and student assessment. The college also has a stringent recruitment process that selects only the most qualified candidates with excellent teaching backgrounds and high GPAs. Peer assessment was used to evaluate the college's teaching and learning methods to ensure that they performed as expected. The strengths and areas for improvement highlighted in the peer evaluation report were used to inform the continuing professional development goals for the following year. Annual faculty evaluation is also a tool to improve the college's educational atmosphere and pedagogy.

In this study, the authors examined how college and university students perceive their learning and teachers. It was found that 53% of the participants reported that education was positive, while 50% indicated that teachers were making progress. The average means of these scores were 27.2 ± 9.444 for SPoL and 23.01 ± 7.904 for SPoT. Indicators of students' experiences with the learning process and their perceptions of teachers' effectiveness were the DREEM scores for SPoL and SPoT. A SPoL score of 27.2 suggests that slightly more than half of the participants were satisfied with their learning experiences. However, a score of 23.01 for SPoT indicates that about half of the students are satisfied with the teaching methods and approaches, which suggests that they feel their teachers are moving in the right direction.

Positive perceptions of learning and teachers can be attributed to the continuous professional development programs in universities and colleges. The faculty's capacity for teaching and learning is enhanced by continuing professional development. To create a more dynamic and effective learning environment for students, institutions should provide faculty members with opportunities to improve their teaching skills and stay current with the latest teaching methodologies.

Through its stringent recruitment process, the college selects only qualified candidates with excellent teaching backgrounds and high GPAs, resulting in a higher quality faculty and a better educational experience for students. Selecting competent teachers is a critical component of ensuring high-quality instruction for students.

A positive aspect of the college approach is the use of peer assessment to evaluate teaching and learning methods. Experienced colleagues provide unbiased feedback in peer evaluations, highlighting strengths and improvement areas. Using the findings from the peer evaluation report, faculty members can set goals for professional development, ensuring that they address areas that need improvement. Annual faculty evaluation is a valuable tool for assessing and improving a college's educational environment and pedagogy. Using faculty evaluations can provide insights into how well instructors interact with their students, create a supportive learning environment, and adapt teaching methods to meet students' needs. As a result, the college can identify areas for improvement and make data-driven decisions that will enhance the educational experience of all students.

Due to the lack of explicit comparisons with results from other colleges, we cannot directly assess how the college's DREEM scores for SPoL and SPoT compare with those of other institutions. However, its continuous professional development programs, stringent recruitment process, peer evaluations, and annual faculty evaluations indicate that the college is taking proactive measures to ensure a positive educational environment for students, reflecting a commitment to academic excellence and student success. These same practices appear to be responsible for the positive perceptions of learning and of teachers reported here, and the emphasis on improving the quality of education and creating a conducive learning environment is evident in them.

Students' Academic Self-Perceptions (SASP)

Approximately 54% of participants ( n  = 173) felt positive, with a mean result of 18.29 ± 6.560, as shown in Tables 2 and 3 . This result closely relates to the students' ability to cope with the academic workload [ 26 ]. A well-designed course timetable with more self-directed learning sessions allocated is a leading cause of this positive perception, as seen in the Australian DREEM study [ 36 ]. The many extracurricular activities aligned with program learning outcomes and implemented within the course schedule also gave students pressure-free time to learn non-technical skills, further supporting the positive perception. The mean score of 18.29 lies above the midpoint of the SASP scale, indicating that many participants felt confident in managing their academic responsibilities. This positive perception can benefit student well-being and academic performance: when students feel capable of handling their workload, they are likely to experience less stress and anxiety, which can lead to improved learning outcomes.

According to this study, students' perceptions of their ability to cope with academic workloads are influenced by several factors. Course timetables are essential to students' perceptions of their academic workload. Having self-directed learning sessions in the timetable allows students to manage their time effectively and to control their learning pace. Students are empowered to take control of their learning journey using this approach, which is aligned with active learning and student-centered education principles. A positive perception of coping with academic workload is also supported by the implementation of extracurricular activities that align with program-learning outcomes. A well-rounded educational experience can be achieved by participating in extracurricular activities beyond the core academic curriculum. Students benefit from these activities regarding personal growth, skill development, and social interaction, all of which can reduce stress and improve their overall wellbeing.

Because the study provides no explicit comparisons, the current college's scores for coping with academic workload cannot be directly compared with those of other colleges or institutions. Nevertheless, a mean score of 18.29, above the midpoint of the DREEM scale, indicates a positive perception among participants. Students appear able to manage their academic demands effectively, which is an encouraging sign of the college's commitment to creating a conducive learning environment. Overall, students' perceptions of their ability to cope with academic workloads were positive. This is partly attributed to well-designed course schedules with self-directed learning sessions and the implementation of extracurricular activities aligned with the learning outcomes. Students' well-being and academic performance can be enhanced if they perceive academic workload management positively. Further research and comparison could provide a better understanding of students' overall academic experiences relative to other colleges.

Students' Perceptions of Atmosphere (SPoA)

By "learning resources", the authors mean things like the physical layout of classrooms and clinics and the attitude and demeanor of instructors during class and patient care; it also comprises academic regulations and the planning of the academic curriculum. Tables 2 and 3 show that 54% of the students ( n  = 174) felt that the environment had improved, with a mean score in this group of 27.33 ± 9.342. The results are encouraging in light of the findings of the study at Taibah University's College of Medicine [ 37 ]. The positive student perception rests on well-designed timetables, a motivating environment, a wide range of extracurricular activities offered to students to enhance and encourage their interpersonal skills, and academic advisory services, such as academic, psychological, behavioral, and career counseling.

This section discusses the results for learning resources, which encompass various aspects such as the physical layout of classrooms and clinics, instructor attitudes and demeanor during class and patient care, academic regulations, and curriculum planning. In the current study, 54% of students ( n  = 174) felt that the learning environment had improved, with a mean score of 27.33 ± 9.342, results that are comparable to those reported at Taibah University's College of Medicine [ 37 ] and therefore encouraging. It is noteworthy that a majority of the students positively perceived the improved learning resources; such perceptions suggest that students are satisfied with various aspects of their academic experience, including physical facilities, instructor attitudes, and academic structure.

A well-designed timetable plays an essential role in shaping students' academic experiences. This study identified several factors that influenced students' positive perceptions of learning resources. Optimizing class sessions, clinical rotations, and study time can help students to achieve a balanced and effective learning schedule. Students can better manage their academic workload when their timetables are organized, allowing for a smoother flow of learning activities.

Supportive and encouraging environments can inspire students to strive for academic excellence and to participate actively in their educational journey. Motivating environments foster enthusiasm and engagement among students, which can positively affect learning outcomes. Participation in various extracurricular activities is also beneficial for students' interpersonal skills: essential life skills, such as teamwork, leadership, and communication, can be developed through such activities beyond the traditional academic setting. Student support can be provided in various ways, including academic, psychological, behavioral, and career counseling; through such advisory services, students can navigate academic challenges and make informed career decisions.

The results of the study conducted at Taibah University's College of Medicine are described as encouraging. However, that study did not provide specific details regarding the DREEM scores or the learning resources evaluated, so a direct comparison is difficult. Ultimately, students' positive perceptions of improved learning resources were encouraging, and students expressed satisfaction with various aspects of their learning environments. Through well-designed timetables, a motivating environment, extracurricular activities, and academic advisory services, the college strives to enhance students' overall learning experiences. Although the comparison with the Taibah University study is encouraging, more comprehensive details and further research are needed to draw meaningful conclusions and to understand learning resources across colleges and institutions in a broader context.

Students' Social Self-Perceptions (SSSP)

In the current study, 50% of students, with a mean of 15.66 ± 5.813, perceived social life as more positive, as shown in Tables 2 and 3 , as did students at various institutes in SA such as Jazan University [ 29 ], Qassim University [ 26 ], and King Khalid University [ 25 ]. This is similar to studies conducted in Sudan (17/28) [ 38 ], Pakistan (15.4/28) [ 39 ], and Malaysia (16.7/28) [ 33 ]. The finding of a good social life in this study is partially attributed to the extracurricular activities offered by the college and by the Deanship of Student Affairs at the university level. The curriculum is type-centered, with many active learning activities that increase student socialization with colleagues and tutors. In addition, academic advisory services play a significant role in shaping social life, providing an excellent psychological support and feedback system for relevant students. In conclusion, this study showed that monitoring the educational environment can provide important information that medical educators should use to address the challenges that need attention and to implement improvements.

According to the study, half of the students perceived social life more positively, as indicated by a mean score of 15.66 ± 5.813. In addition to comparing these findings with those of other Saudi Arabian universities, this study also compares them with those from Sudan, Pakistan, and Malaysia. The extracurricular activities, the type-centered curriculum with active learning activities, and the academic advisory services providing psychological support and feedback to students all contributed to the positive perception of social life. The DREEM scale for social life ranges from 0 to 28, with higher scores indicating a more positive view; a mean score of 15.66 therefore indicates that social life within the educational environment still has room for improvement. Several factors identified in this study contribute to this relatively positive view of social life.

Students' socialization is likely to be promoted by extracurricular activities offered by the college and by the university’s Deanship of Student Affairs. Such activities allow students to interact with their peers outside the classroom, thereby fostering social connections and a sense of belonging. Student engagement and tutor–student interaction can be increased through a type-centered curriculum with active learning activities. Students collaborate and build relationships with their instructors and each other through active learning approaches such as group discussions, team-based projects, and hands-on learning experiences.

Students can benefit from the psychological support and feedback offered by academic advisory services, which can enhance their social life. This can contribute to a more positive social experience for students who receive guidance and mentorship from advisors. The study results were compared with those from other institutes in Saudi Arabia, Sudan, Pakistan, and Malaysia regarding social life perception. Although these studies provide specific DREEM scores, it is evident that perceptions of social life across institutions are similar. Based on the comparable scores, social life deserves attention and improvement in various educational environments.

The study concluded that the educational environment and students' perceptions of social life should be monitored. According to 50% of the participants, extracurricular activities, a type-centered curriculum with active learning, and academic advisory services contribute to a positive social experience. This emphasizes the need for medical educators to address challenges and implement changes to enhance students' social experiences, even though there is still room for improvement. Educators can create a more supportive and enriched social environment for students by understanding their perceptions.

According to the DREEM inventory evaluations, the educational environment at Princess Nora University in Saudi Arabia is generally perceived positively. The students' recognition of the academic atmosphere appears exceptionally high, although there is scope for bolstering social self-perceptions. Notably, the DREEM inventory has proved a powerful and comprehensive tool for gauging different facets of the educational environment. Based on the positive perception of the educational environment at Princess Nora University and the effectiveness of the DREEM inventory, it is suggested that other Saudi Arabian universities adopt this tool to pinpoint difficulties and opportunities for enhancing their own educational landscapes.

Availability of data and materials

All authors have shared their raw data and attached them to the supplementary files.

Abbreviations

DREEM: Dundee Ready Environment Educational Measure

HPE: Health professional education

NCAAA: National Commission for Academic Accreditation and Assessment

PNU: Princess Noura University

SASP: Students' Academic Self-Perceptions

SPoA: Students' Perceptions of Atmosphere

SPoL: Students' Perceptions of Learning

SPoT: Students' Perceptions of Teachers

SSSP: Students' Social Self-Perceptions

Roff S, et al. Development and validation of the Dundee ready education environment measure (DREEM). Med Teach. 1997;19(4):295–9.


Genn J. AMEE Medical Education Guide No. 23 (Part 1): Curriculum, environment, climate, quality and change in medical education–a unifying perspective. Medical teacher. 2001;23(4):337–44.

Carron PN, Trueb L, Yersin B. High-fidelity simulation in the nonmedical domain: practices and potential transferable competencies for the medical field. Adv Med Educ Pract. 2011;2:149–55.

Kaya VH. Teachers’ Self-Efficacy in Terms of Former Experience and Professional Development in the Turkish World Based on TALIS 2018 Data: Sample of Turkey and Kazakhstan. 2021. Online Submission.

General Medical Council. The state of medical education and practice in the UK 2013. General Medical Council. 2013.

Al-Shiekh MH, Ismail MH, Al-Khater SA. Validation of the postgraduate hospital educational environment measure at a Saudi university medical school. Saudi Med J. 2014;35(7):734–8.


Aburizaizah SJ. The role of quality assurance in Saudi higher education institutions. Int J Educ Res Open. 2022;3:100127.

Almrstani AM, Alnoman A, Abduljabbar H, Sait H, Bazarah M, Eldeek B. Evaluation of the Assessment Plan for Undergraduate Clerkship in Obstetrics and Gynecology, King Abdulaziz University. J Med Educ Curr Dev. 2014;1:JMECD-S18463.

Almaleki D. Examinee Characteristics and Their Impact on the Psychometric Properties of a Multiple Choice Test According to the Item Response Theory (IRT). Eng Technol Appl Sci Res. 2021;11:6889–901.

Miles S, Swift L, Leinster SJ. The Dundee Ready Education Environment Measure (DREEM): a review of its adoption and use. Med Teach. 2012;34(9):e620–34.

Al-Hazimi A, et al. Educational environment in traditional and innovative medical schools: a study in four undergraduate medical schools. Educ Health-Abingdon-Carfax Publishing Limited-. 2004;17(2):192–203.

Roff S, et al. A global diagnostic tool for measuring educational environment: comparing Nigeria and Nepal. Med Teach. 2001;23(4):378–82.

Feletti G, Clarke R. Review of psychometric features of the medical school learning environment survey. Med Educ. 1981;15(2):92–6.

Ingoglia NA. Perspective: a proposal to establish master’s in biomedical sciences degree programs in medical school environments. Acad Med. 2009;84(4):464–7.

Levy M, et al. Use of the learning environment questionnaire to assess curricular change. Acad Med. 1973;48(9):840–5.

Marshall RE. Measuring the medical school learning environment. Acad Med. 1978;53(2):98–104.

Rothman AI, Ayoade F. The development of a learning environment: a questionnaire for use in curriculum evaluation. Acad Med. 1970;45(10):754–9.

Wakeford RE. Students' perception of the medical school learning environment: a pilot study into some differences and similarities between clinical schools in the UK. Assess Higher Educ. 1981;6(3):206–17.

Roff S. The Dundee Ready Educational Environment Measure (DREEM)—a generic instrument for measuring students’ perceptions of undergraduate health professions curricula. Med Teach. 2005;27(4):322–5.

Roff S, McAleer S. What is educational climate? Med Teach. 2001;23(4):333–4.

Cochran WG. Sampling techniques. John Wiley & Sons; 1977.

Soemantri D, Herrera C, Riquelme A. Measuring the educational environment in health professions studies: a systematic review. Med Teach. 2010;32(12):947–52.

Gruppen LD, et al. Conceptualizing learning environments in the health professions. Acad Med. 2019;94(7):969–74.

Headrick G, et al. State Implementation of SNAP Waivers and Flexibilities During the COVID-19 Pandemic: perspectives from state agency leaders. J Nutr Educ Behav. 2022;54(11):982–97.

Alshehri SA, Alshehri AF, Erwin TD. Measuring the medical school educational environment: validating an approach from Saudi Arabia. Health Educ J. 2012;71(5):553–64.

Al-Mohaimeed A. Perceptions of the educational environment of a new medical school, Saudi Arabia. Int J Health Sci. 2013;7(2):150.

Al-Kabbaa AF, et al. Perception of the learning environment by students in a new medical school in Saudi Arabia: areas of concern. J Taibah Univ Med Sci. 2012;7(2):69–75.

Altemani AH, Merghani TH. The quality of the educational environment in a medical college in Saudi Arabia. Int J Med Educ. 2017;8:128.

Essa M. Students perceptions of learning environment in Jazan Medical School in Saudi Arabia. 2022.

Book   Google Scholar  

Al-Hazimi A, Al-Hyiani A, Roff S. Perceptions of the educational environment of the medical school in King Abdul Aziz University, Saudi Arabia. Med Teach. 2004;26(6):570–3.

Dunne F, McAleer S, Roff S. Assessment of the undergraduate medical education environment in a large UK medical school. Health Educ J. 2006;65(2):149–58.

Hamdan HZ, et al. Medical Students’ Perception of the Educational Environment at Al-Nahda College by using DREEM inventory. 2019.

Al-Naggar RA, et al. The Malaysia DREEM: perceptions of medical students about the learning environment in a medical school in Malaysia. Adv Med Educ Pract. 2014;5:177–84.

Demiroren M, et al. Perceptions of students in different phases of Medicai education of educational environment: Ankara University Facuity of medicine. Med Educ Online. 2008;13(1):4477.

Jiffry M, et al. Using the DREEM questionnaire to gather baseline information on an evolving medical school in Sri Lanka. Med Teach. 2005;27(4):348–52.

Brown T, Williams B, Lynch M. The Australian DREEM: evaluating student perceptions of academic learning environments within eight health science courses. Int J Med Educ. 2011;2:94.

Mojaddidi MA, et al. Reassessment of the undergraduate educational environment in college of medicine, Taibah university, Almadinah Almunawwarah. Saudi Arabia Medical teacher. 2013;35(sup1):S39–46.

Salih KM, et al. MBBS teaching program, according to DREEM in College of Medicine, University of Bahri, Khartoum, Sudan. Adv Med Educ Pract. 2018;9:617–22.

Khursheed I, Baig L. Students’ perceptions of educational environment of a private medical school in Pakistan. J Pak Med Assoc. 2014;64(11):1244–9.

Download references

Acknowledgements

The authors thank Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia, for supporting this work through Researchers Supporting Project number PNURSP2022R283.

Author information

Authors and Affiliations

Department of Educational Technology, College of Education, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia

Latefa Hamad Al Fryan

Applied College, Education (Curriculum and Instruction), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia

Mahasin Ibrahim Shomo

Department of Pathological Sciences, College of Medicine, Ajman University, Ajman, UAE

Ibrahim A. Bani

Department of Chronic Disease Epidemiology, Yale School of Public Health, Yale University, New Haven, USA


Contributions

Ibrahim Bani designed the research, conceived the idea, developed the theory, verified the analytical methods, supervised the findings of this work, and prepared and edited the manuscript. Latefa Hamad Al Fryan and Mahasin Ibrahim Shomo were responsible for data collection and analysis and for presenting the findings in a concise and understandable format.

Corresponding author

Correspondence to Ibrahim A. Bani.

Ethics declarations

Ethics approval and consent to participate

The authors confirm that all methods were carried out in accordance with the relevant guidelines and regulations. All protocols were approved by the committee chairman under Institutional Review Board Registration Number HAP-01-R-059 (registered with KACST, Saudi Arabia) at Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia. Informed consent was obtained from all subjects and/or their legal guardian(s).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1. Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Al Fryan, L.H., Shomo, M.I. & Bani, I.A. Assessment of the students’ perceptions of education using Dundee Ready Environment Educational Measure (DREEM) inventory at Princess Nora bint Abdulrahman University, Saudi Arabia. BMC Med Educ 24, 928 (2024). https://doi.org/10.1186/s12909-024-05870-9


Received: 30 April 2023

Accepted: 06 August 2024

Published: 26 August 2024

DOI: https://doi.org/10.1186/s12909-024-05870-9


Keywords

  • DREEM inventory
  • Student perceptions
  • Educational environment
  • Learner attitudes
  • Saudi Arabian education


Key things to know about U.S. election polling in 2024


Confidence in U.S. public opinion polling was shaken by errors in 2016 and 2020. In both years’ general elections, many polls underestimated the strength of Republican candidates, including Donald Trump. These errors laid bare some real limitations of polling.

In the midterms that followed those elections, polling performed better. But many Americans remain skeptical that it can paint an accurate portrait of the public’s political preferences.

Restoring people’s confidence in polling is an important goal, because robust and independent public polling has a critical role to play in a democratic society. It gathers and publishes information about the well-being of the public and about citizens’ views on major issues. And it provides an important counterweight to people in power, or those seeking power, when they make claims about “what the people want.”

The challenges facing polling are undeniable. In addition to the longstanding issues of rising nonresponse and cost, summer 2024 brought extraordinary events that transformed the presidential race. The good news is that people with deep knowledge of polling are working hard to fix the problems exposed in 2016 and 2020, experimenting with more data sources and interview approaches than ever before. Still, polls are more useful to the public if people have realistic expectations about what surveys can do well – and what they cannot.

With that in mind, here are some key points to know about polling heading into this year’s presidential election.

Probability sampling (or “random sampling”). This refers to a polling method in which survey participants are recruited using random sampling from a database or list that includes nearly everyone in the population. The pollster selects the sample. The survey is not open for anyone who wants to sign up.

Online opt-in polling (or “nonprobability sampling”). These polls are recruited using a variety of methods that are sometimes referred to as “convenience sampling.” Respondents come from a variety of online sources such as ads on social media or search engines, websites offering rewards in exchange for survey participation, or self-enrollment. Unlike surveys with probability samples, people can volunteer to participate in opt-in surveys.

Nonresponse and nonresponse bias. Nonresponse is when someone sampled for a survey does not participate. Nonresponse bias occurs when the pattern of nonresponse leads to error in a poll estimate. For example, college graduates are more likely than those without a degree to participate in surveys, leading to the potential that the share of college graduates in the resulting sample will be too high.

Mode of interview. This refers to the format in which respondents are presented with and respond to survey questions. The most common modes are online, live telephone, text message and paper. Some polls use more than one mode.

Weighting. This is a statistical procedure pollsters perform to make their survey align with the broader population on key characteristics like age, race, etc. For example, if a survey has too many college graduates compared with their share in the population, people without a college degree are “weighted up” to match the proper share.
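To make the weighting idea concrete, here is a minimal Python sketch of the one-variable case described above, where respondents without a college degree are "weighted up." All shares and opinion figures are invented for illustration; they are not taken from any actual survey.

```python
# Minimal post-stratification sketch: scale each group's weight so its
# weighted share matches an assumed population share. All numbers are
# hypothetical, chosen only to illustrate the mechanics of weighting.

def group_weight(sample_share: float, population_share: float) -> float:
    """Weight applied to every respondent in a group so that the group's
    weighted share equals its share of the population."""
    return population_share / sample_share

# Suppose 60% of respondents are college graduates but graduates are only
# 35% of the adult population (made-up figures).
w_grad = group_weight(0.60, 0.35)       # ~0.58: graduates weighted down
w_nongrad = group_weight(0.40, 0.65)    # ~1.63: non-graduates weighted up

# Effect on an opinion estimate when the two groups answer differently
# (again, invented numbers): weighting pulls the estimate toward the
# group that was underrepresented in the raw sample.
approve_grad, approve_nongrad = 0.55, 0.40
unweighted = 0.60 * approve_grad + 0.40 * approve_nongrad   # 0.49
weighted = 0.35 * approve_grad + 0.65 * approve_nongrad     # ~0.45

print(f"weights: grad={w_grad:.2f}, non-grad={w_nongrad:.2f}")
print(f"approval estimate: unweighted={unweighted:.3f}, weighted={weighted:.3f}")
```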

How are election polls being conducted?

Pollsters are making changes in response to the problems in previous elections. As a result, polling is different today than in 2016. Most U.S. polling organizations that conducted and publicly released national surveys in both 2016 and 2022 (61%) used methods in 2022 that differed from what they used in 2016. And change has continued since 2022.

[Chart: As the number of public pollsters in the U.S. has grown, survey methods have become more diverse.]

One change is that the number of active polling organizations has grown significantly, indicating that there are fewer barriers to entry into the polling field. The number of organizations that conduct national election polls more than doubled between 2000 and 2022.

This growth has been driven largely by pollsters using inexpensive opt-in sampling methods. But previous Pew Research Center analyses have demonstrated how surveys that use nonprobability sampling may have errors twice as large, on average, as those that use probability sampling.

The second change is that many of the more prominent polling organizations that use probability sampling – including Pew Research Center – have shifted from conducting polls primarily by telephone to using online methods, or some combination of online, mail and telephone. The result is that polling methodologies are far more diverse now than in the past.

(For more about how public opinion polling works, including a chapter on election polls, read our short online course on public opinion polling basics.)

All good polling relies on a statistical adjustment called “weighting,” which makes sure that the survey sample aligns with the broader population on key characteristics. Historically, public opinion researchers have adjusted their data using a core set of demographic variables to correct imbalances between the survey sample and the population.

But there is a growing realization among survey researchers that weighting a poll on just a few variables like age, race and gender is insufficient for getting accurate results. Some groups of people – such as older adults and college graduates – are more likely to take surveys, which can lead to errors that are too sizable for a simple three- or four-variable adjustment to work well. Adjusting on more variables produces more accurate results, according to Center studies in 2016 and 2018.

A number of pollsters have taken this lesson to heart. For example, recent high-quality polls by Gallup and The New York Times/Siena College adjusted on eight and 12 variables, respectively. Our own polls typically adjust on 12 variables. In a perfect world, it wouldn’t be necessary to have that much intervention by the pollster. But the real world of survey research is not perfect.
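One common way to adjust on many variables at once is raking (iterative proportional fitting), in which weights are repeatedly rescaled so the sample's weighted margins match population targets one variable at a time. The toy Python sketch below rakes six hypothetical respondents on two invented variables; it illustrates the general technique under made-up targets, not any pollster's actual weighting protocol.

```python
# Toy raking (iterative proportional fitting): cycle through the weighting
# variables, each time rescaling weights so the weighted margins for that
# variable match the population targets. Respondents and targets are
# invented; real weighting uses many more variables and respondents.

respondents = [            # (education, age_group)
    ("grad", "18-49"), ("grad", "18-49"), ("grad", "50+"),
    ("nongrad", "18-49"), ("nongrad", "50+"), ("nongrad", "50+"),
]
targets = {                # assumed population shares for each variable
    0: {"grad": 0.35, "nongrad": 0.65},      # index 0 = education
    1: {"18-49": 0.55, "50+": 0.45},         # index 1 = age group
}

weights = [1.0] * len(respondents)

for _ in range(50):                          # iterate until margins settle
    for var_idx, target in targets.items():
        total = sum(weights)
        for category, share in target.items():
            current = sum(w for w, r in zip(weights, respondents)
                          if r[var_idx] == category)
            factor = share * total / current
            weights = [w * factor if r[var_idx] == category else w
                       for w, r in zip(weights, respondents)]

for r, w in zip(respondents, weights):
    print(r, round(w, 3))
```

After raking, the weighted shares for both variables match their targets simultaneously, which a single one-variable adjustment cannot guarantee.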


Predicting who will vote is critical – and difficult. Preelection polls face one crucial challenge that routine opinion polls do not: determining which of the people surveyed will actually cast a ballot.

Roughly a third of eligible Americans do not vote in presidential elections, despite the enormous attention paid to these contests. Determining who will abstain is difficult because people can’t perfectly predict their future behavior – and because many people feel social pressure to say they’ll vote even if it’s unlikely.

No one knows the profile of voters ahead of Election Day. We can’t know for sure whether young people will turn out in greater numbers than usual, or whether key racial or ethnic groups will do so. This means pollsters are left to make educated guesses about turnout, often using a mix of historical data and current measures of voting enthusiasm. This is very different from routine opinion polls, which mostly do not ask about people’s future intentions.
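As a concrete, deliberately oversimplified illustration of what such an educated guess can look like, the Python sketch below combines past turnout with stated enthusiasm and treats the result as a turnout probability. The point values, question scale, and respondents are invented; actual likely-voter models differ across pollsters and are considerably more elaborate.

```python
# A deliberately simple likely-voter sketch: blend turnout history with
# self-reported enthusiasm. The weights (0.35 / 0.25 / 0.40) are arbitrary
# illustrative choices, not any pollster's actual model.

def turnout_score(voted_presidential: bool, voted_midterm: bool,
                  enthusiasm_0_10: int) -> float:
    score = 0.0
    score += 0.35 if voted_presidential else 0.0   # presidential-year history
    score += 0.25 if voted_midterm else 0.0        # midterm history
    score += 0.40 * (enthusiasm_0_10 / 10)         # stated enthusiasm
    return score                                   # interpreted as P(votes)

# A habitual voter versus an irregular but enthusiastic one; the second
# kind is exactly what makes turnout hard to anticipate from history alone.
print(round(turnout_score(True, True, 6), 2))    # 0.84
print(round(turnout_score(True, False, 10), 2))  # 0.75
```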

When major news breaks, a poll’s timing can matter. Public opinion on most issues is remarkably stable, so you don’t necessarily need a recent poll about an issue to get a sense of what people think about it. But dramatic events can and do change public opinion, especially when people are first learning about a new topic. For example, polls this summer saw notable changes in voter attitudes following Joe Biden’s withdrawal from the presidential race. Polls taken immediately after a major event may pick up a shift in public opinion, but those shifts are sometimes short-lived. Polls fielded weeks or months later are what allow us to see whether an event has had a long-term impact on the public’s psyche.

How accurate are polls?

The answer to this question depends on what you want polls to do. Polls are used for all kinds of purposes in addition to showing who’s ahead and who’s behind in a campaign. Fair or not, however, the accuracy of election polling is usually judged by how closely the polls matched the outcome of the election.

[Chart: Polling errors in U.S. presidential elections.]

By this standard, polling in 2016 and 2020 performed poorly. In both years, state polling was characterized by serious errors. National polling did reasonably well in 2016 but faltered in 2020.

In 2020, a post-election review of polling by the American Association for Public Opinion Research (AAPOR) found that “the 2020 polls featured polling error of an unusual magnitude: It was the highest in 40 years for the national popular vote and the highest in at least 20 years for state-level estimates of the vote in presidential, senatorial, and gubernatorial contests.”

How big were the errors? Polls conducted in the last two weeks before the election suggested that Biden’s margin over Trump was nearly twice as large as it ended up being in the final national vote tally.

Errors of this size make it difficult to be confident about who is leading if the election is closely contested, as many U.S. elections are.

Pollsters are rightly working to improve the accuracy of their polls. But even an error of 4 or 5 percentage points isn’t too concerning if the purpose of the poll is to describe whether the public has favorable or unfavorable opinions about candidates, or to show which issues matter to which voters. And on questions that gauge where people stand on issues, we usually want to know broadly where the public stands. We don’t necessarily need to know the precise share of Americans who say, for example, that climate change is mostly caused by human activity. Even judged by its performance in recent elections, polling can still provide a faithful picture of public sentiment on the important issues of the day.

The 2022 midterms saw generally accurate polling, despite a wave of partisan polls predicting a broad Republican victory. In fact, FiveThirtyEight found that “polls were more accurate in 2022 than in any cycle since at least 1998, with almost no bias toward either party.” Moreover, a handful of contrarian polls that predicted a 2022 “red wave” largely washed out when the votes were tallied. In sum, if we focus on polling in the most recent national election, there’s plenty of reason to be encouraged.

Compared with other elections in the past 20 years, polls have been less accurate when Donald Trump is on the ballot. Preelection surveys suffered from large errors – especially at the state level – in 2016 and 2020, when Trump was standing for election. But they performed reasonably well in the 2018 and 2022 midterms, when he was not.


During the 2016 campaign, observers speculated about the possibility that Trump supporters might be less willing to express their support to a pollster – a phenomenon sometimes described as the “shy Trump effect.” But a committee of polling experts evaluated five different tests of the “shy Trump” theory and turned up little to no evidence for any of them. Later, Pew Research Center and, in a separate test, a researcher from Yale also found little to no evidence in support of the claim.

Instead, two other explanations are more likely. One is about the difficulty of estimating who will turn out to vote. Research has found that Trump is popular among people who tend to sit out midterms but turn out for him in presidential election years. Since pollsters often use past turnout to predict who will vote, it can be difficult to anticipate when irregular voters will actually show up.

The other explanation is that Republicans in the Trump era have become a little less likely than Democrats to participate in polls. Pollsters call this “partisan nonresponse bias.” Surprisingly, polls historically have not shown any particular pattern of favoring one side or the other. The errors that favored Democratic candidates in the past eight years may be a result of the growth of political polarization, along with declining trust among conservatives in news organizations and other institutions that conduct polls.

Whatever the cause, the fact that Trump is again the nominee of the Republican Party means that pollsters must be especially careful to make sure all segments of the population are properly represented in surveys.

The real margin of error is often about double the one reported. A typical election poll sample of about 1,000 people has a margin of sampling error that’s about plus or minus 3 percentage points. That number expresses the uncertainty that results from taking a sample of the population rather than interviewing everyone. Random samples are likely to differ a little from the population just by chance, in the same way that the quality of your hand in a card game varies from one deal to the next.
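The familiar "plus or minus 3 points" corresponds to the standard formula for the sampling error of a proportion at 95% confidence, evaluated at p = 0.5, which gives the widest interval. A quick Python check (the sample sizes are just examples):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of sampling error for a proportion, as a fraction."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # ~3.1 points for n = 1,000
print(round(100 * margin_of_error(500), 1))   # ~4.4 points: smaller sample, wider interval
```

As the surrounding text notes, this figure reflects sampling error only; studies of total error suggest the practical uncertainty is often roughly double what this formula reports.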

[Table: Sampling error is not the only kind of polling error.]

The problem is that sampling error is not the only kind of error that affects a poll. Those other kinds of error, in fact, can be as large as or larger than sampling error. Consequently, the reported margin of error can lead people to think that polls are more accurate than they really are.

There are three other, equally important sources of error in polling: noncoverage error, where not all of the target population has a chance of being sampled; nonresponse error, where certain groups of people may be less likely to participate; and measurement error, where people may not properly understand the questions or misreport their opinions. Not only does the margin of error fail to account for those other sources of potential error; putting a number only on sampling error also implies to the public that other kinds of error do not exist.

Several recent studies show that the average total error in a poll estimate may be closer to twice as large as that implied by a typical margin of sampling error. This hidden error underscores the fact that polls may not be precise enough to call the winner in a close election.

Other important things to remember

Transparency in how a poll was conducted is associated with better accuracy. The polling industry has several platforms and initiatives aimed at promoting transparency in survey methodology. These include AAPOR’s transparency initiative and the Roper Center archive. Polling organizations that participate in these efforts have less error, on average, than those that don’t, an analysis by FiveThirtyEight found.

Participation in these transparency efforts does not guarantee that a poll is rigorous, but it is undoubtedly a positive signal. Transparency in polling means disclosing essential information, including the poll’s sponsor, the data collection firm, where and how participants were selected, modes of interview, field dates, sample size, question wording, and weighting procedures.

There is evidence that when the public is told that a candidate is extremely likely to win, some people may be less likely to vote. Following the 2016 election, many people wondered whether the pervasive forecasts that seemed to all but guarantee a Hillary Clinton victory – two modelers put her chances at 99% – led some would-be voters to conclude that the race was effectively over and that their vote would not make a difference. There is scientific research to back up that claim: A team of researchers found experimental evidence that when people have high confidence that one candidate will win, they are less likely to vote. This helps explain why some polling analysts say elections should be covered using traditional polling estimates and margins of error rather than speculative win probabilities (also known as “probabilistic forecasts”).
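To see why analysts draw this distinction, the sketch below converts a lead and an assumed standard error into a "win probability" using a simple normal approximation. The numbers are invented and real forecasting models are far more elaborate; the point is only that the resulting probability depends heavily on how much total error is assumed, which a plainly reported margin of error makes explicit.

```python
from statistics import NormalDist

def win_probability(lead_pts: float, se_pts: float) -> float:
    """P(true lead > 0) under a normal approximation of the polling lead."""
    return 1.0 - NormalDist(mu=lead_pts, sigma=se_pts).cdf(0.0)

# The same 2-point lead looks quite different depending on the assumed error:
print(round(win_probability(2.0, 3.0), 2))  # ~0.75 with a ~3-point error on the lead
print(round(win_probability(2.0, 6.0), 2))  # ~0.63 if total error is roughly double
```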

National polls tell us what the entire public thinks about the presidential candidates, but the outcome of the election is determined state by state in the Electoral College. The 2000 and 2016 presidential elections demonstrated a difficult truth: The candidate with the largest share of support among all voters in the United States sometimes loses the election. In those two elections, the national popular vote winners (Al Gore and Hillary Clinton) lost the election in the Electoral College (to George W. Bush and Donald Trump). In recent years, analysts have shown that Republican candidates do somewhat better in the Electoral College than in the popular vote because every state gets at least three electoral votes regardless of population – and many less-populated states are rural and more Republican.

For some, this raises the question: What is the use of national polls if they don’t tell us who is likely to win the presidency? In fact, national polls try to gauge the opinions of all Americans, regardless of whether they live in a battleground state like Pennsylvania, a reliably red state like Idaho or a reliably blue state like Rhode Island. In short, national polls tell us what the entire citizenry is thinking. Polls that focus only on the competitive states run the risk of giving too little attention to the needs and views of the vast majority of Americans who live in uncompetitive states – about 80%.

Fortunately, this is not how most pollsters view the world. As the noted political scientist Sidney Verba explained, “Surveys produce just what democracy is supposed to produce – equal representation of all citizens.”



Scott Keeter is a senior survey advisor at Pew Research Center.


Courtney Kennedy is Vice President of Methods and Innovation at Pew Research Center.

