S371 Social Work Research - Jill Chonody: What is Quantitative Research?


Quantitative Research in the Social Sciences

This page is courtesy of University of Southern California: http://libguides.usc.edu/content.php?pid=83009&sid=615867

Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across groups of people or using it to explain a particular phenomenon.

Babbie, Earl R. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Muijs, Daniel. Doing Quantitative Research in Education with SPSS. 2nd ed. London: SAGE Publications, 2010.

Characteristics of Quantitative Research

Your goal in conducting a quantitative research study is to determine the relationship between one thing [an independent variable] and another [a dependent or outcome variable] within a population. Quantitative research designs are either descriptive [subjects usually measured once] or experimental [subjects measured before and after a treatment]. A descriptive study establishes only associations between variables; an experimental study establishes causality.

Quantitative research deals in numbers, logic, and an objective stance. Quantitative research focuses on numeric and unchanging data and detailed, convergent reasoning rather than divergent reasoning [i.e., the generation of a variety of ideas about a research problem in a spontaneous, free-flowing manner].

Its main characteristics are:

  • The data is usually gathered using structured research instruments.
  • The results are based on larger sample sizes that are representative of the population.
  • The research study can usually be replicated or repeated, given its high reliability.
  • The researcher has a clearly defined research question to which objective answers are sought.
  • All aspects of the study are carefully designed before data is collected.
  • Data are in the form of numbers and statistics, often arranged in tables, charts, figures, or other non-textual forms.
  • The project can be used to generalize concepts more widely, predict future results, or investigate causal relationships.
  • The researcher uses tools, such as questionnaires or computer software, to collect numerical data.

The overarching aim of a quantitative research study is to classify features, count them, and construct statistical models in an attempt to explain what is observed.

  Things to keep in mind when reporting the results of a study using quantitative methods:

  • Explain the data collected and their statistical treatment as well as all relevant results in relation to the research problem you are investigating. Interpretation of results is not appropriate in this section.
  • Report unanticipated events that occurred during your data collection. Explain how the actual analysis differs from the planned analysis. Explain your handling of missing data and why any missing data does not undermine the validity of your analysis.
  • Explain the techniques you used to "clean" your data set.
  • Choose a minimally sufficient statistical procedure; provide a rationale for its use and a reference for it. Specify any computer programs used.
  • Describe the assumptions for each procedure and the steps you took to ensure that they were not violated.
  • When using inferential statistics, provide the descriptive statistics, confidence intervals, and sample sizes for each variable as well as the value of the test statistic, its direction, the degrees of freedom, and the significance level [report the actual p value] (see the sketch after this list).
  • Avoid inferring causality, particularly in nonrandomized designs or without further experimentation.
  • Use tables to provide exact values; use figures to convey global effects. Keep figures small in size; include graphic representations of confidence intervals whenever possible.
  • Always tell the reader what to look for in tables and figures.
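To make the checklist above concrete, here is a minimal sketch in Python (not part of the original guide) of how the reported quantities for a simple two-group comparison might be computed: group descriptive statistics, the t statistic with its degrees of freedom, the exact p value, and a 95% confidence interval for the mean difference. The group names and scores are invented for illustration.

```python
# A minimal, illustrative sketch (not from the original guide). It computes the
# quantities the checklist asks you to report for a simple two-group comparison.
# The group names and scores below are invented.
import numpy as np
from scipy import stats

treatment = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.7, 14.8, 13.9])
control = np.array([10.4, 11.9, 10.1, 12.3, 11.0, 10.8, 12.6, 11.4])

# Descriptive statistics for each group
for name, group in [("treatment", treatment), ("control", control)]:
    print(f"{name}: n = {group.size}, M = {group.mean():.2f}, SD = {group.std(ddof=1):.2f}")

# Independent-samples t test (equal variances assumed for simplicity)
result = stats.ttest_ind(treatment, control, equal_var=True)
df = treatment.size + control.size - 2

# 95% confidence interval for the difference in means (pooled standard error)
diff = treatment.mean() - control.mean()
pooled_var = (((treatment.size - 1) * treatment.var(ddof=1)
               + (control.size - 1) * control.var(ddof=1)) / df)
se = np.sqrt(pooled_var * (1 / treatment.size + 1 / control.size))
margin = stats.t.ppf(0.975, df) * se

print(f"t({df}) = {result.statistic:.2f}, p = {result.pvalue:.3f}, "
      f"mean difference = {diff:.2f}, 95% CI [{diff - margin:.2f}, {diff + margin:.2f}]")
```

The final printed line maps directly onto the reporting format described above: test statistic, degrees of freedom, exact p value, and a confidence interval alongside the descriptive statistics.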

NOTE: When using pre-existing statistical data gathered and made available by anyone other than yourself [e.g., a government agency], you must still report on the methods that were used to gather the data, describe any missing data, and, if there is any, provide a clear explanation of why the missing data does not undermine the validity of your final analysis.

Babbie, Earl R. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods. 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches. 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Quantitative Research Methods. Writing@CSU. Colorado State University; Singh, Kultar. Quantitative Social Research Methods. Los Angeles, CA: Sage, 2007.

Basic Research Designs for Quantitative Studies

Before designing a quantitative research study, you must decide whether it will be descriptive or experimental because this will dictate how you gather, analyze, and interpret the results. A descriptive study is governed by the following rules: subjects are generally measured once; the intention is only to establish associations between variables; and the study may include a sample population of hundreds or thousands of subjects to ensure that a valid estimate of a generalized relationship between variables has been obtained. An experimental design includes subjects measured before and after a particular treatment, the sample population may be very small and purposefully chosen, and it is intended to establish causality between variables.

Introduction

The introduction to a quantitative study is usually written in the present tense and from the third-person point of view. It covers the following information:

  • Identifies the research problem -- as with any academic study, you must state clearly and concisely the research problem being investigated.
  • Reviews the literature -- review scholarship on the topic, synthesizing key themes and, if necessary, noting studies that have used similar methods of inquiry and analysis. Note where key gaps exist and how your study helps to fill these gaps or clarifies existing knowledge.
  • Describes the theoretical framework -- provide an outline of the theory or hypothesis underpinning your study. If necessary, define unfamiliar or complex terms, concepts, or ideas and provide the appropriate background information to place the research problem in proper context [e.g., historical, cultural, economic, etc.].

Methodology

The methods section of a quantitative study should describe how each objective of your study will be achieved. Be sure to provide enough detail to enable the reader to make an informed assessment of the methods being used to obtain results associated with the research problem. The methods section should be presented in the past tense.

  • Study population and sampling -- where did the data come from; how robust is it; note where gaps exist or what was excluded; and describe the procedures used for sample selection.
  • Data collection -- describe the tools and methods used to collect information and identify the variables being measured; describe the methods used to obtain the data; and note whether the data was pre-existing [e.g., government data] or you gathered it yourself. If you gathered it yourself, describe what type of instrument you used and why. Note that no data set is perfect -- describe any limitations in methods of gathering data.
  • Data analysis -- describe the procedures for processing and analyzing the data. If appropriate, describe the specific instruments of analysis used to study each research objective, including mathematical techniques and the type of computer software used to manipulate the data.

Results

The findings of your study should be written objectively and in a succinct and precise format. In quantitative studies, it is common to use graphs, tables, charts, and other non-textual elements to help the reader understand the data. Make sure that non-textual elements do not stand in isolation from the text but are used to supplement the overall description of the results and to help clarify key points being made.

  • Statistical analysis -- how did you analyze the data? What were the key findings from the data? The findings should be presented in a logical, sequential order. Describe but do not interpret these trends or negative results; save that for the discussion section. The results should be presented in the past tense.

Discussion

Discussions should be analytic, logical, and comprehensive. The discussion should meld your findings with those identified in the literature review and place them within the context of the theoretical framework underpinning the study. The discussion should be presented in the present tense.

  • Interpretation of results -- reiterate the research problem being investigated and compare and contrast the findings with the research questions underlying the study. Did the data affirm predicted outcomes or refute them?
  • Description of trends, comparison of groups, or relationships among variables -- describe any trends that emerged from your analysis and explain all unanticipated and statistically insignificant findings.
  • Discussion of implications – what is the meaning of your results? Highlight key findings based on the overall results and note findings that you believe are important. How have the results helped fill gaps in understanding the research problem?
  • Limitations -- describe any limitations or unavoidable bias in your study and, if necessary, note why these limitations did not inhibit effective interpretation of the results.

Conclusion

End your study by summarizing the topic and providing a final comment and assessment of the study.

  • Summary of findings – synthesize the answers to your research questions. Do not report any statistical data here; just provide a narrative summary of the key findings and describe what was learned that you did not know before conducting the study.
  • Recommendations – if appropriate to the aim of the assignment, tie key findings with policy recommendations or actions to be taken in practice.
  • Future research – note the need for future research linked to your study’s limitations or to any remaining gaps in the literature that were not addressed in your study.

Black, Thomas R. Doing Quantitative Research in the Social Sciences: An Integrated Approach to Research Design, Measurement and Statistics. London: Sage, 1999; Gay, L. R. and Peter Airasian. Educational Research: Competencies for Analysis and Applications. 7th ed. Upper Saddle River, NJ: Merrill Prentice Hall, 2003; Hector, Anestine. An Overview of Quantitative Research in Composition and TESOL. Department of English, Indiana University of Pennsylvania; Hopkins, Will G. “Quantitative Research Design.” Sportscience 4, 1 (2000); A Strategy for Writing Up Research Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Nenty, H. Johnson. "Writing a Quantitative Research Thesis." International Journal of Educational Science 1 (2009): 19-32; Ouyang, Ronghua (John). Basic Inquiry of Quantitative Research. Kennesaw State University.


Social Work Research Methods That Drive the Practice


Social workers advocate for the well-being of individuals, families and communities. But how do social workers know what interventions are needed to help an individual? How do they assess whether a treatment plan is working? What do social workers use to write evidence-based policy?

Social work involves research-informed practice and practice-informed research. At every level, social workers need to know objective facts about the populations they serve, the efficacy of their interventions and the likelihood that their policies will improve lives. A variety of social work research methods make that possible.

Data-Driven Work

Data is a collection of facts used for reference and analysis. In a field as broad as social work, data comes in many forms.

Quantitative vs. Qualitative

As with any research, social work research involves both quantitative and qualitative studies.

Quantitative Research

Quantitative data — facts that can be measured and expressed numerically — are crucial for social work.

  • How many students currently receive reduced-price school lunches in the local school district?
  • How many hours per week does a specific individual consume digital media?
  • How frequently did community members access a specific medical service last year?

Answers to questions like these can help social workers learn about the populations they serve — or hope to serve in the future.

Quantitative research has advantages for social scientists. Such research can be more generalizable to large populations, as it uses specific sampling methods and lends itself to large datasets. It can provide important descriptive statistics about a specific population. Furthermore, by operationalizing variables, it can help social workers easily compare similar datasets with one another.

Qualitative Research

Qualitative data — facts that cannot be measured or expressed in terms of mere numbers or counts — offer rich insights into individuals, groups and societies. They can be collected via interviews and observations.

  • What attitudes do students have toward the reduced-price school lunch program?
  • What strategies do individuals use to moderate their weekly digital media consumption?
  • What factors made community members more or less likely to access a specific medical service last year?

Qualitative research can thereby provide a textured view of social contexts and systems that quantitative methods alone may not capture. Plus, it may even suggest new lines of inquiry for social work research.

Mixed Methods Research

Combining quantitative and qualitative methods into a single study is known as mixed methods research. This form of research has gained popularity in the study of social sciences, according to a 2019 report in the academic journal Theory and Society. Since quantitative and qualitative methods answer different questions, merging them into a single study can balance the limitations of each and potentially produce more in-depth findings.

However, mixed methods research is not without its drawbacks. Combining research methods increases the complexity of a study and generally requires a higher level of expertise to collect, analyze and interpret the data. It also requires a greater level of effort, time and often money.

The Importance of Research Design

Data-driven practice plays an essential role in social work. Unlike philanthropists and altruistic volunteers, social workers are obligated to operate from a scientific knowledge base.

To know whether their programs are effective, social workers must conduct research to determine results, aggregate those results into comprehensible data, analyze and interpret their findings, and use evidence to justify next steps.

Employing the proper design ensures that any evidence obtained during research enables social workers to reliably answer their research questions.

Research Methods in Social Work

The various social work research methods have specific benefits and limitations determined by context. Common research methods include surveys, program evaluations, needs assessments, randomized controlled trials, descriptive studies and single-system designs.

Surveys

Surveys involve a hypothesis and a series of questions in order to test that hypothesis. Social work researchers will send out a survey, receive responses, aggregate the results, analyze the data, and form conclusions based on trends.

Surveys are one of the most common research methods social workers use — and for good reason. They tend to be relatively simple and are usually affordable. However, surveys generally require large participant groups, and self-reports from survey respondents are not always reliable.
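As a rough illustration of the aggregate-and-analyze workflow described above, the sketch below uses Python with pandas. The file name and column names are hypothetical; they are not drawn from any real survey.

```python
# Illustrative sketch only: aggregating hypothetical survey responses with pandas.
# The file name and the columns "respondent_id", "age_group", and
# "service_satisfaction" (a 1-5 rating) are assumptions for this example.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Overall response summary
print("Responses received:", len(responses))
print("Mean satisfaction:", round(responses["service_satisfaction"].mean(), 2))

# Aggregate by subgroup to look for trends across age groups
by_age = (responses
          .groupby("age_group")["service_satisfaction"]
          .agg(["count", "mean", "std"])
          .round(2))
print(by_age)
```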

Program Evaluations

Social workers ally with all sorts of programs: after-school programs, government initiatives, nonprofit projects and private programs, for example.

Crucially, social workers must evaluate a program’s effectiveness in order to determine whether the program is meeting its goals and what improvements can be made to better serve the program’s target population.

Evidence-based programming helps everyone save money and time, and comparing programs with one another can help social workers make decisions about how to structure new initiatives. Evaluating programs becomes complicated, however, when programs have multiple goal metrics, some of which may be vague or difficult to assess (e.g., “we aim to promote the well-being of our community”).

Needs Assessments

Social workers use needs assessments to identify services and necessities that a population lacks access to.

Common social work populations that researchers may perform needs assessments on include:

  • People in a specific income group
  • Everyone in a specific geographic region
  • A specific ethnic group
  • People in a specific age group

In the field, a social worker may use a combination of methods (e.g., surveys and descriptive studies) to learn more about a specific population or program. Social workers look for gaps between the actual context and a population’s or individual’s “wants” or desires.

For example, a social worker could conduct a needs assessment with an individual with cancer trying to navigate the complex medical-industrial system. The social worker may ask the client questions about the number of hours they spend scheduling doctor’s appointments, commuting and managing their many medications. After learning more about the specific client needs, the social worker can identify opportunities for improvements in an updated care plan.

In policy and program development, social workers conduct needs assessments to determine where and how to effect change on a much larger scale. Integral to social work at all levels, needs assessments reveal crucial information about a population’s needs to researchers, policymakers and other stakeholders. Needs assessments may fall short, however, in revealing the root causes of those needs (e.g., structural racism).

Randomized Controlled Trials

Randomized controlled trials are studies in which participants are randomly assigned either to a group that receives a variable (e.g., a specific stimulus or treatment) or to a control group that does not. Social workers then measure and compare the results of the randomized group with the control group in order to glean insights about the effectiveness of a particular intervention or treatment.

Randomized controlled trials are easily reproducible and highly measurable. They’re useful when results are easily quantifiable. However, this method is less helpful when results are not easily quantifiable (i.e., when rich data such as narratives and on-the-ground observations are needed).
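The sketch below illustrates, in Python, the basic logic of random assignment followed by a group comparison. It is a simplified illustration with simulated scores, not a template for an actual trial.

```python
# A rough sketch of the core logic described above: random assignment to an
# intervention or control condition, then a comparison of outcomes.
# All participant IDs and outcome scores here are simulated, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
participants = np.arange(40)                # 40 hypothetical participant IDs

# Random assignment: shuffle the IDs, then split the list in half
shuffled = rng.permutation(participants)
intervention_ids, control_ids = shuffled[:20], shuffled[20:]

# Pretend these are outcome scores collected after the trial period
intervention_scores = rng.normal(loc=6.0, scale=1.5, size=intervention_ids.size)
control_scores = rng.normal(loc=5.0, scale=1.5, size=control_ids.size)

# Compare the randomized (intervention) group with the control group
result = stats.ttest_ind(intervention_scores, control_scores)
print(f"intervention mean = {intervention_scores.mean():.2f}, "
      f"control mean = {control_scores.mean():.2f}, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```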

Descriptive Studies

Descriptive studies immerse the researcher in another context or culture to study specific participant practices or ways of living. Descriptive studies, including descriptive ethnographic studies, may overlap with and include other research methods:

  • Informant interviews
  • Census data
  • Observation

By using descriptive studies, researchers may glean a richer, deeper understanding of a nuanced culture or group on-site. The main limitations of this research method are that it tends to be time-consuming and expensive.

Single-System Designs

Unlike most medical studies, which involve testing a drug or treatment on two groups — an experimental group that receives the drug/treatment and a control group that does not — single-system designs allow researchers to study just one group (e.g., an individual or family).

Single-system designs typically entail studying a single group over a long period of time and may involve assessing the group’s response to multiple variables.

For example, consider a study on how media consumption affects a person’s mood. One way to test a hypothesis that consuming media correlates with low mood would be to observe two groups: a control group (no media) and an experimental group (two hours of media per day). When employing a single-system design, however, researchers would observe a single participant as they watch two hours of media per day for one week and then four hours per day of media the next week.

These designs allow researchers to test multiple variables over a longer period of time. However, similar to descriptive studies, single-system designs can be fairly time-consuming and costly.
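A minimal Python sketch of how the media-consumption example above might be summarized under a single-system (A/B phase) design follows; all ratings are fabricated.

```python
# A sketch of summarizing the single-system (A/B phase) example above in pandas.
# The daily mood ratings and phase labels are fabricated for illustration.
import pandas as pd

ratings = pd.DataFrame({
    "day": list(range(1, 15)),
    "phase": ["two_hours"] * 7 + ["four_hours"] * 7,   # week 1 vs. week 2
    "mood": [6, 7, 6, 5, 7, 6, 6,    # week at two hours of media per day
             5, 4, 5, 4, 5, 4, 4],   # week at four hours of media per day
})

# Compare the single participant's mood ratings across the two phases
summary = ratings.groupby("phase")["mood"].agg(["mean", "std", "min", "max"])
print(summary.round(2))

# A line plot of mood over time (phase change after day 7) would usually
# accompany this table, e.g. ratings.plot(x="day", y="mood") with matplotlib.
```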

Learn More About Social Work Research Methods

Social workers have the opportunity to improve the social environment by advocating for the vulnerable — including children, older adults and people with disabilities — and facilitating and developing resources and programs.

Learn more about how you can earn your Master of Social Work online at Virginia Commonwealth University. The highest-ranking school of social work in Virginia, VCU has a wide range of courses online. That means students can earn their degrees with the flexibility of learning at home. Learn more about how you can take your career in social work further with VCU.


Gov.uk, Mixed Methods Study

MVS Open Press, Foundations of Social Work Research

Open Social Work Education, Scientific Inquiry in Social Work

Open Social Work, Graduate Research Methods in Social Work: A Project-Based Approach

Routledge, Research for Social Workers: An Introduction to Methods

SAGE Publications, Research Methods for Social Work: A Problem-Based Approach

Theory and Society, Mixed Methods Research: What It Is and What It Could Be





Social Work Research Methods. By Allen Rubin. Last reviewed: 14 December 2009. Last modified: 14 December 2009. DOI: 10.1093/obo/9780195389678-0008

Social work research means conducting an investigation in accordance with the scientific method. The aim of social work research is to build the social work knowledge base in order to solve practical problems in social work practice or social policy. Investigating phenomena in accordance with the scientific method requires maximal adherence to empirical principles, such as basing conclusions on observations that have been gathered in a systematic, comprehensive, and objective fashion. The resources in this entry discuss how to do that as well as how to utilize and teach research methods in social work. Other professions and disciplines commonly produce applied research that can guide social policy or social work practice. Yet no commonly accepted distinction exists at this time between social work research methods and research methods in allied fields relevant to social work. Consequently, useful references pertaining to research methods in allied fields that can be applied to social work research are included in this entry.

This section includes basic textbooks that are used in courses on social work research methods. Considerable variation exists between textbooks on the broad topic of social work research methods. Some are comprehensive and delve into topics deeply and at a more advanced level than others. That variation is due in part to the different needs of instructors at the undergraduate and graduate levels of social work education. Most instructors at the undergraduate level prefer shorter and relatively simplified texts; however, some instructors teaching introductory master’s courses on research prefer such texts too. The texts in this section that might best fit their preferences are by Yegidis and Weinbach 2009 and Rubin and Babbie 2007. The remaining books might fit the needs of instructors at both levels who prefer a more comprehensive and deeper coverage of research methods. Among them Rubin and Babbie 2008 is perhaps the most extensive and is often used at the doctoral level as well as the master’s and undergraduate levels. Also extensive are Drake and Jonson-Reid 2007, Grinnell and Unrau 2007, Kreuger and Neuman 2006, and Thyer 2001. What distinguishes Drake and Jonson-Reid 2007 is its heavy inclusion of statistical and Statistical Package for the Social Sciences (SPSS) content integrated with each chapter. Grinnell and Unrau 2007 and Thyer 2001 are unique in that they are edited volumes with different authors for each chapter. Kreuger and Neuman 2006 takes Neuman’s social sciences research text and adapts it to social work. The Practitioner’s Guide to Using Research for Evidence-based Practice (Rubin 2007) emphasizes the critical appraisal of research, covering basic research methods content in a relatively simplified format for instructors who want to teach research methods as part of the evidence-based practice process instead of with the aim of teaching students how to produce research.

Drake, Brett, and Melissa Jonson-Reid. 2007. Social work research methods: From conceptualization to dissemination. Boston: Allyn and Bacon.

This introductory text is distinguished by its use of many evidence-based practice examples and its heavy coverage of statistical and computer analysis of data.

Grinnell, Richard M., and Yvonne A. Unrau, eds. 2007. Social work research and evaluation: Quantitative and qualitative approaches. 8th ed. New York: Oxford Univ. Press.

Contains chapters written by different authors, each focusing on a comprehensive range of social work research topics.

Kreuger, Larry W., and W. Lawrence Neuman. 2006. Social work research methods: Qualitative and quantitative applications. Boston: Pearson, Allyn, and Bacon.

An adaptation to social work of Neuman's social sciences research methods text. Its framework emphasizes comparing quantitative and qualitative approaches. Despite its title, quantitative methods receive more attention than qualitative methods, although it does contain considerable qualitative content.

Rubin, Allen. 2007. Practitioner’s guide to using research for evidence-based practice. Hoboken, NJ: Wiley.

This text focuses on understanding quantitative and qualitative research methods and designs for the purpose of appraising research as part of the evidence-based practice process. It also includes chapters on instruments for assessment and monitoring practice outcomes. It can be used at the graduate or undergraduate level.

Rubin, Allen, and Earl R. Babbie. 2007. Essential research methods for social work. Belmont, CA: Thomson Brooks Cole.

This is a shorter and less advanced version of Rubin and Babbie 2008. It can be used for research methods courses at the undergraduate or master's levels of social work education.

Rubin, Allen, and Earl R. Babbie. 2008. Research methods for social work. 6th ed. Belmont, CA: Thomson Brooks Cole.

This comprehensive text focuses on producing quantitative and qualitative research as well as utilizing such research as part of the evidence-based practice process. It is widely used for teaching research methods courses at the undergraduate, master’s, and doctoral levels of social work education.

Thyer, Bruce A., ed. 2001. The handbook of social work research methods. Thousand Oaks, CA: Sage.

This comprehensive compendium includes twenty-nine chapters written by esteemed leaders in social work research. It covers quantitative and qualitative methods as well as general issues.

Yegidis, Bonnie L., and Robert W. Weinbach. 2009. Research methods for social workers. 6th ed. Boston: Allyn and Bacon.

This introductory paperback text covers a broad range of social work research methods and does so in a briefer fashion than most lengthier, hardcover introductory research methods texts.

Quantitative Research Methods for Social Work: Making Social Work Count, Barbra Teater, John Devaney, Donald Forester, Jonathan Scourfield and John Carpenter


Hugh McLaughlin, Quantitative Research Methods for Social Work: Making Social Work Count, Barbra Teater, John Devaney, Donald Forester, Jonathan Scourfield and John Carpenter, The British Journal of Social Work, Volume 52, Issue 3, April 2022, Pages 1793–1795, https://doi.org/10.1093/bjsw/bcaa116


I remember sharing a lift at a joint IFSW/IASSW World Social Work Conference with a professor from America who taught research methods. After chatting about teaching research methods, he informed me gleefully that his students are taught qualitative methods first. However, after they get to him, none of them leave his classroom without being ‘converted to quantitative methods’! At this point, he got off the lift, leaving our discussion in the air.

This book arose from funding from the Economic and Social Research Council to address the quantitative skills gap in the social sciences. The grants were applied for under the auspices of the Joint University Council Social Work Education Committee to upskill social work academics and develop a curriculum resource with teaching aids. I was saddened to discover that many of the free resources are no longer available and wondered if anything could be done to remedy this.

The book is unusual for the UK in that its major focus is on quantitative methods, unlike other social work research methods books, which tend to cover both qualitative and quantitative methods (Campbell et al., 2017; Smith, 2009). Until this book came along, many of us will have been happy using non-social work research methods texts to learn about quantitative methods (Bryman, 2015). This authoritative text offers a fresh and imaginative approach to teaching quantitative methods. It is set out in an incremental and easily accessible format with a series of exercises and critical thinking boxes with suggested readings at the end of each chapter. The exercises and critical thinking questions are well thought out, with a full answer to each at the back of the book—thus making it really useful for those of us who teach research methods. It is also aimed at social work academics, social work students and practitioners who want to learn more about quantitative approaches, where they are useful, how they can be read and understood, and how they can be applied to their setting.


Quantitative Research

  • Shenyang Guo, Wallace H. Kuralt Distinguished Professor, School of Social Work, University of North Carolina at Chapel Hill
  • https://doi.org/10.1093/acrefore/9780199975839.013.333
  • Published online: 11 June 2013

This entry describes the definition, history, theories, and applications of quantitative methods in social work research. Unlike qualitative research, quantitative research emphasizes precise, objective, and generalizable findings. Quantitative methods are based on numerous probability and statistical theories, with rigorous proofs and support from both simulated and empirical data. Regression analysis plays a paramount role in contemporary statistical methods, which include event history analysis, generalized linear modeling, hierarchical linear modeling, propensity score matching, and structural equation modeling. Quantitative methods can be employed in all stages of a scientific inquiry, ranging from sample selection to final data analysis.

  • event history analysis
  • generalized linear modeling
  • hierarchical linear modeling
  • propensity score matching
  • structural equation modeling



What Is Quantitative Research? | Definition, Uses & Methods

Published on June 12, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

Quantitative research is the opposite of qualitative research, which involves collecting and analyzing non-numerical data (e.g., text, video, or audio).

Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc. Typical quantitative research questions include:

  • What is the demographic makeup of Singapore in 2020?
  • How has the average temperature changed globally over the last century?
  • Does environmental pollution affect the prevalence of honey bees?
  • Does working from home increase productivity for people with long commutes?


You can use quantitative research methods for descriptive, correlational or experimental research.

  • In descriptive research , you simply seek an overall summary of your study variables.
  • In correlational research , you investigate relationships between your study variables.
  • In experimental research , you systematically examine whether there is a cause-and-effect relationship between variables.

Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalized to broader populations based on the sampling method used.

To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).
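As a small, hypothetical illustration of an operational definition, the snippet below turns the abstract concept of "mood" into a single quantifiable score built from two invented self-rating items.

```python
# Hypothetical illustration of an operational definition: "mood" is defined
# here as the average of two invented 1-7 self-rating items. This is an
# assumption for the example, not a validated instrument.
from statistics import mean

def mood_score(feeling_rating: int, energy_rating: int) -> float:
    """Operationalize 'mood' as the mean of two 1-7 self-ratings."""
    return mean([feeling_rating, energy_rating])

print(mood_score(5, 6))   # 5.5 on the made-up 1-7 mood index
```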

Quantitative research methods:

  • Experiment -- Control or manipulate an independent variable to measure its effect on a dependent variable. Example: To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention.
  • Survey -- Ask questions of a group of people in person, over the phone, or online. Example: You distribute questionnaires with rating scales to first-year international college students to investigate their experiences of culture shock.
  • (Systematic) observation -- Identify a behavior or occurrence of interest and monitor it in its natural setting. Example: To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds.
  • Secondary research -- Collect data that has been gathered for other purposes, e.g., national surveys or historical records. Example: To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from widely available datasets.

Note that quantitative research is at risk for certain research biases, including information bias, omitted variable bias, sampling bias, and selection bias. Be aware of potential biases as you collect and analyze your data so that you can limit their impact on your findings.


Once data is collected, you may need to process it before it can be analyzed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .

Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualize your data and check for any trends or outliers.

Using inferential statistics , you can make predictions or generalizations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter .

For example, continuing the procrastination study above: first, you use descriptive statistics to get a summary of the data. You find the mean (average) and the mode (most frequent rating) of procrastination for the two groups, and plot the data to see if there are any outliers.
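As a rough illustration of that descriptive step, here is a minimal sketch in Python using pandas. The procrastination ratings below are made-up values for two hypothetical groups, not data from an actual study.

```python
import pandas as pd

# Hypothetical 1-10 self-ratings of procrastination for two equal-sized groups.
data = pd.DataFrame({
    "group": ["intervention"] * 5 + ["control"] * 5,
    "procrastination": [3, 4, 2, 3, 3, 6, 5, 7, 6, 9],
})

# Central tendency: mean and mode per group.
print(data.groupby("group")["procrastination"].mean())
print(data.groupby("group")["procrastination"].apply(lambda s: s.mode().tolist()))

# Variability and extreme values, standing in for a quick outlier check.
print(data.groupby("group")["procrastination"].describe())
```

In practice you would also plot the distributions (for example, with boxplots) before moving on to inferential statistics.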

You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.

Quantitative research is often used to standardize data collection and generalize findings . Strengths of this approach include:

  • Replication

Repeating the study is possible because of standardized data collection protocols and tangible definitions of abstract concepts.

  • Direct comparisons of results

The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.

  • Large samples

Data from large samples can be processed and analyzed using reliable and consistent procedures through quantitative data analysis.

  • Hypothesis testing

Using formalized and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.

Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:

  • Superficiality

Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.

  • Narrow focus

Predetermined variables and measurement procedures can mean that you ignore other relevant observations.

  • Structural bias

Despite standardized procedures, structural biases can still affect quantitative research. Missing data , imprecise measurements or inappropriate sampling methods are biases that can lead to the wrong conclusions.

  • Lack of context

Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.
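As a hypothetical sketch of what such an operational definition can look like in practice, the function below averages three illustrative 1-5 self-rating items into a single social anxiety score. The item names and scoring rule are assumptions for the example, not a published scale.

```python
# Illustrative operationalization: "social anxiety" scored as the mean of
# three hypothetical 1-5 self-rating items.
ITEMS = ["avoids_crowds", "fears_judgement", "physical_symptoms"]

def social_anxiety_score(ratings: dict) -> float:
    """Average a respondent's item ratings into one composite score."""
    return sum(ratings[item] for item in ITEMS) / len(ITEMS)

respondent = {"avoids_crowds": 4, "fears_judgement": 5, "physical_symptoms": 3}
print(social_anxiety_score(respondent))  # 4.0
```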

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
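One common way to check the consistency side of reliability (internal consistency) is Cronbach's alpha. The sketch below computes it by hand with NumPy on hypothetical questionnaire responses; the data, and the usual reading that values closer to 1 suggest more consistent items, are illustrative rather than taken from the article above.

```python
import numpy as np

# Hypothetical responses: rows are respondents, columns are scale items (1-5).
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [4, 4, 5],
])

k = responses.shape[1]                               # number of items
item_variances = responses.var(axis=0, ddof=1)       # variance of each item
total_variance = responses.sum(axis=1).var(ddof=1)   # variance of summed scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))  # values closer to 1 suggest more internally consistent items
```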

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
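To make that procedure concrete, here is a minimal sketch of an independent-samples t-test with SciPy on hypothetical group ratings (the specific test is an illustrative choice, not one prescribed by the article). Note that we either reject or fail to reject the null hypothesis; we never "accept" it.

```python
from scipy import stats

# Hypothetical post-intervention procrastination self-ratings for two groups.
intervention = [3, 4, 2, 3, 3]
control = [6, 5, 7, 6, 9]

t_stat, p_value = stats.ttest_ind(intervention, control)

alpha = 0.05  # conventional significance threshold
if p_value < alpha:
    # The observed difference would be unlikely under the null hypothesis
    # of no group difference, so we reject the null hypothesis.
    print(f"Reject the null hypothesis (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    print(f"Fail to reject the null hypothesis (t = {t_stat:.2f}, p = {p_value:.3f})")
```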

Cite this Scribbr article


Bhandari, P. (2023, June 22). What Is Quantitative Research? | Definition, Uses & Methods. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/quantitative-research/



Quantitative and Qualitative Research


What is Quantitative Research?

  • What is Qualitative Research?
  • Quantitative vs Qualitative
  • Step 1: Accessing CINAHL
  • Step 2: Create a Keyword Search
  • Step 3: Create a Subject Heading Search
  • Step 4: Repeat Steps 1-3 for Second Concept
  • Step 5: Repeat Steps 1-3 for Quantitative Terms
  • Step 6: Combining All Searches
  • Step 7: Adding Limiters
  • Step 8: Save Your Search!
  • What Kind of Article is This?
  • More Research Help

Quantitative methodology is the dominant research framework in the social sciences. It refers to a set of strategies, techniques and assumptions used to study psychological, social and economic processes through the exploration of numeric patterns . Quantitative research gathers a range of numeric data. Some of the numeric data is intrinsically quantitative (e.g. personal income), while in other cases the numeric structure is  imposed (e.g. ‘On a scale from 1 to 10, how depressed did you feel last week?’). The collection of quantitative information allows researchers to conduct simple to extremely sophisticated statistical analyses that aggregate the data (e.g. averages, percentages), show relationships among the data (e.g. ‘Students with lower grade point averages tend to score lower on a depression scale’) or compare across aggregated data (e.g. the USA has a higher gross domestic product than Spain). Quantitative research includes methodologies such as questionnaires, structured observations or experiments and stands in contrast to qualitative research. Qualitative research involves the collection and analysis of narratives and/or open-ended observations through methodologies such as interviews, focus groups or ethnographies.

Coghlan, D., & Brydon-Miller, M. (2014). The SAGE encyclopedia of action research (Vols. 1-2). London: SAGE Publications Ltd. doi:10.4135/9781446294406
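As a rough sketch of the "relationships among the data" idea, the snippet below computes a Pearson correlation between hypothetical GPA values and depression-scale scores with SciPy. The numbers are invented so that the result mirrors the example above (lower GPAs going with lower depression scores); they are not real data.

```python
from scipy import stats

# Hypothetical paired observations for six students.
gpa = [3.9, 3.5, 3.1, 2.8, 2.4, 2.0]
depression_score = [21, 18, 14, 13, 10, 8]  # higher = more depressive symptoms reported

r, p_value = stats.pearsonr(gpa, depression_score)
# A positive r in this toy data matches the pattern described above: students
# with lower GPAs also tend to report lower depression-scale scores.
print(f"r = {r:.2f}, p = {p_value:.3f}")
```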

What is the purpose of quantitative research?

The purpose of quantitative research is to generate knowledge and create understanding about the social world. Quantitative research is used by social scientists, including communication researchers, to observe phenomena or occurrences affecting individuals. Social scientists are concerned with the study of people. Quantitative research is a way to learn about a particular group of people, known as a sample population. Using scientific inquiry, quantitative research relies on data that are observed or measured to examine questions about the sample population.

Allen, M. (2017). The SAGE encyclopedia of communication research methods (Vols. 1-4). Thousand Oaks, CA: SAGE Publications, Inc. doi:10.4135/9781483381411

How do I know if the study is a quantitative design?  What type of quantitative study is it?

Quantitative Research Designs: Descriptive non-experimental, Quasi-experimental or Experimental?

Studies do not always explicitly state what kind of research design is being used, so you will need to know how to decipher which design type is used. The video accompanying the original guide walks through how to determine the quantitative design type.

Source: University of Texas Arlington Libraries, https://libguides.uta.edu/quantitative_and_qualitative_research (last updated Aug 19, 2024).

A Quick Guide to Quantitative Research in the Social Sciences

(12 reviews)


Christine Davies, Carmarthen, Wales

Copyright Year: 2020

Last Update: 2021

Publisher: University of Wales Trinity Saint David

Language: English

Conditions of use: Attribution-NonCommercial

Reviewed by Jennifer Taylor, Assistant Professor, Texas A&M University-Corpus Christi on 4/18/24


Comprehensiveness rating: 4

This resource is a quick guide to quantitative research in the social sciences and not a comprehensive resource. It provides a VERY general overview of quantitative research but offers a good starting place for students new to research. It offers links and references to additional resources that are more comprehensive in nature.

Content Accuracy rating: 4

The content is relatively accurate. The measurement scale section is very sparse. Not all types of research designs or statistical methods are included, but it is a guide, so details are meant to be limited.

Relevance/Longevity rating: 4

The examples were interesting and appropriate. The content is up to date and will be useful for several years.

Clarity rating: 5

The text was clearly written. Tables and figures are not referenced in the text, which would have been nice.

Consistency rating: 5

The framework is consistent across chapters with terminology clearly highlighted and defined.

Modularity rating: 5

The chapters are subdivided into sections that can be divided and assigned as reading in a course. Most chapters are brief and concise, unless elaboration is necessary, such as with the data analysis chapter. Again, this is a guide and not a comprehensive text, so sections are shorter and don't always include every subtopic that may be considered.

Organization/Structure/Flow rating: 5

The guide is well organized. I appreciate that the topics are presented in a logical and clear manner. The topics are provided in an order consistent with traditional research methods.

Interface rating: 5

The interface was easy to use and navigate. The images were clear and easy to read.

Grammatical Errors rating: 5

I did not notice any grammatical errors.

Cultural Relevance rating: 5

The materials are not culturally insensitive or offensive in any way.

I teach a Marketing Research course to undergraduates. I would consider using some of the chapters or topics included, especially the overview of the research designs and the analysis of data section.

Reviewed by Tiffany Kindratt, Assistant Professor, University of Texas at Arlington on 3/9/24


Comprehensiveness rating: 3

The text provides a brief overview of quantitative research topics that is geared towards research in the fields of education, sociology, business, and nursing. The author acknowledges that the textbook is not a comprehensive resource but offers references to other resources that can be used to deepen the knowledge. The text does not include a glossary or index. The references in the figures for each chapter are not included in the reference section. It would be helpful to include those.

Overall, the text is accurate. For example, Figure 1 on page 6 provides a clear overview of the research process. It includes general definitions of primary and secondary research. It would be helpful to include more details to explain some of the examples before they are presented. For instance, the example on page 5 was unclear how it pertains to the literature review section.

In general, the text is relevant and up-to-date. The text makes many references to moving from a qualitative to a quantitative approach, which was surprising to me as a quantitative researcher. The author suggests that moving from a qualitative to a quantitative approach should only be done when needed. As a predominantly quantitative researcher, I would instead advise those interested in making that transition that a quantitative approach can enhance their research, rather than treating it as something to be done only when necessary.

Clarity rating: 4

The text is written in a clear manner. It would be helpful to the reader if there was a description of the tables and figures in the text before they are presented.

Consistency rating: 4

The framework for each chapter and terminology used are consistent.

Modularity rating: 4

The text is clearly divided into sections within each chapter. Overall, the chapters are a similar brief length except for the chapter on data analysis, which is much more comprehensive than others.

Organization/Structure/Flow rating: 4

The topics in the text are presented in a clear and logical order. The order of the text follows the conventional research methodology in social sciences.

I did not encounter any interface issues when reviewing this text. All links worked and there were no distortions of the images or charts that may confuse the reader.

Grammatical Errors rating: 3

There are some grammatical/typographical errors throughout. Of note, in the title of Section 5 in the table of contents, “the” should be capitalized to start the title. In the title for Table 3, the “t” in “typical” should be capitalized.

Cultural Relevance rating: 4

The examples are culturally relevant. The text is geared towards learners in the UK, but examples are relevant for use in other countries (i.e., United States). I did not see any examples that may be considered culturally insensitive or offensive in any way.

I teach a course on research methods in a Bachelor of Science in Public Health program. I would consider using some of the text, particularly in the analysis chapter to supplement the current textbook in the future.

Reviewed by Finn Bell, Assistant Professor, University of Michigan, Dearborn on 1/3/24


For it being a quick guide and only 26 pages, it is very comprehensive, but it does not include an index or glossary.

Content Accuracy rating: 5

As far as I can tell, the text is accurate, error-free and unbiased.

Relevance/Longevity rating: 5

This text is up-to-date, and given the content, unlikely to become obsolete any time soon.

The text is very clear and accessible.

The text is internally consistent.

Given how short the text is, it seems unnecessary to divide it into smaller readings, nonetheless, it is clearly labelled such that an instructor could do so.

The text is well-organized and brings readers through basic quantitative methods in a logical, clear fashion.

Easy to navigate. Only one table that is split between pages, but not in a way that is confusing.

There were no noticeable grammatical errors.

The examples in this book don't give enough information to rate this effectively.

This text is truly a very quick guide at only 26 double-spaced pages. Nonetheless, Davies packs a lot of information on the basics of quantitative research methods into this text, in an engaging way with many examples of the concepts presented. This guide is more of a brief how-to that takes readers as far as how to select statistical tests. While it would be impossible to fully learn quantitative research from such a short text, of course, this resource provides a great introduction, overview, and refresher for program evaluation courses.

Reviewed by Shari Fedorowicz, Adjunct Professor, Bridgewater State University on 12/16/22


Comprehensiveness rating: 5

The text is indeed a quick guide for utilizing quantitative research. Appropriate and effective examples and diagrams were used throughout the text. The author clearly differentiates between use of quantitative and qualitative research providing the reader with the ability to distinguish two terms that frequently get confused. In addition, links and outside resources are provided to deepen the understanding as an option for the reader. The use of these links, coupled with diagrams and examples make this text comprehensive.

The content is mostly accurate. Given that it is a quick guide, the author chose a good selection of which types of research designs to include. However, some are not provided. For example, correlational or cross-correlational research is omitted and is not discussed in Section 3, but is used as a statistical example in the last section.

Examples utilized were appropriate and associated with terms adding value to the learning. The tables that included differentiation between types of statistical tests along with a parametric/nonparametric table were useful and relevant.

The purpose of the text and how to use this guidebook are stated clearly and established up front. The author is also very clear regarding the skill level of the user. Adding to the clarity are the tables with terms, definitions, and examples to help the reader unpack the concepts. The content related to the terms was succinct, direct, and clear. Many times examples or figures were used to supplement the narrative.

The text is consistent throughout from contents to references. Within each section of the text, the introductory paragraph under each section provides a clear understanding regarding what will be discussed in each section. The layout is consistent for each section and easy to follow.

The contents are visible and address each section of the text. A total of seven sections, including a reference section, is in the contents. Each section is outlined by what will be discussed in the contents. In addition, within each section, a heading is provided to direct the reader to the subtopic under each section.

The text is well-organized and segues appropriately. I would have liked to have seen an introductory section giving a narrative overview of what is in each section. This would provide the reader with the ability to get a preliminary glimpse into each upcoming section and the topics that are covered.

The book was easy to navigate and well-organized. Examples are presented in one color, links in another, and figures and tables in a third. The visuals supplemented the reading and were placed appropriately. This provides an opportunity for the reader to unpack the reading by use of visuals and examples.

No significant grammatical errors.

The text is not offensive or culturally insensitive. Examples were inclusive of various races, ethnicities, and backgrounds.

This quick guide is a beneficial text to assist in unpacking the learning related to quantitative statistics. I would use this book to complement my instruction and lessons, or use this book as a main text with supplemental statistical problems and formulas. References to statistical programs were appropriate and were useful. The text did exactly what was stated up front in that it is a direct guide to quantitative statistics. It is well-written and to the point with content areas easy to locate by topic.

Reviewed by Sarah Capello, Assistant Professor, Radford University on 1/18/22


The text claims to provide "quick and simple advice on quantitative aspects of research in social sciences," which it does. There is no index or glossary, although vocabulary words are bolded and defined throughout the text.

The content is mostly accurate. I would have preferred a few nuances to be hashed out a bit further to avoid potential reader confusion or misunderstanding of the concepts presented.

The content is current; however, some of the references cited in the text are outdated. Newer editions of those texts exist.

The text is very accessible and readable for a variety of audiences. Key terms are well-defined.

There are no content discrepancies within the text. The author even uses similarly shaped graphics for recurring purposes throughout the text (e.g., arrow call outs for further reading, rectangle call outs for examples).

The content is chunked nicely by topics and sections. If it were used for a course, it would be easy to assign different sections of the text for homework, etc. without confusing the reader if the instructor chose to present the content in a different order.

The author follows the structure of the research process. The organization of the text is easy to follow and comprehend.

All of the supplementary images (e.g., tables and figures) were beneficial to the reader and enhanced the text.

There are no significant grammatical errors.

I did not find any culturally offensive or insensitive references in the text.

This text does the difficult job of introducing the complicated concepts and processes of quantitative research in a quick and easy reference guide fairly well. I would not depend solely on this text to teach students about quantitative research, but it could be a good jumping off point for those who have no prior knowledge on this subject or those who need a gentle introduction before diving in to more advanced and complex readings of quantitative research methods.

Reviewed by J. Marlie Henry, Adjunct Faculty, University of Saint Francis on 12/9/21


Considering the length of this guide, this does a good job of addressing major areas that typically need to be addressed. There is a contents section. The guide does seem to be organized accordingly with appropriate alignment and logical flow of thought. There is no glossary but, for a guide of this length, a glossary does not seem like it would enhance the guide significantly.

The content is relatively accurate. Expanding the content a bit more or explaining that the methods and designs presented are not entirely inclusive would help. As there are different schools of thought regarding what should/should not be included in terms of these designs and methods, simply bringing attention to that and explaining a bit more would help.

Relevance/Longevity rating: 3

This content needs to be updated. Most of the sources cited are seven or more years old. Even more, it would be helpful to see more currently relevant examples. Some of the source authors such as Andy Field provide very interesting and dynamic instruction in general, but they have much more current information available.

The language used is clear and appropriate. Unnecessary jargon is not used. The intent is clear- to communicate simply in a straightforward manner.

The guide seems to be internally consistent in terms of terminology and framework. There do not seem to be issues in this area. Terminology is internally consistent.

For a guide of this length, the author structured this logically into sections. This guide could be adopted in whole or by section with limited modifications. Courses with fewer than seven modules could also logically group some of the sections.

This guide does present with logical organization. The topics presented are conceptually sequenced in a manner that helps learners build logically on prior conceptualization. This also provides a simple conceptual framework for instructors to guide learners through the process.

Interface rating: 4

The visuals themselves are simple, but they are clear and understandable without distracting the learner. The purpose is clear- that of learning rather than visuals for the sake of visuals. Likewise, navigation is clear and without issues beyond a broken link (the last source noted in the references).

This guide seems to be free of grammatical errors.

It would be interesting to see more cultural integration in a guide of this nature, but the guide is not culturally insensitive or offensive in any way. The language used seems to be consistent with APA's guidelines for unbiased language.

Reviewed by Heng Yu-Ku, Professor, University of Northern Colorado on 5/13/21


The text covers all areas and ideas appropriately and provides practical tables, charts, and examples throughout the text. I would suggest the author also provides a complete research proposal at the end of Section 3 (page 10) and a comprehensive research study as an Appendix after section 7 (page 26) to help readers comprehend information better.

For the most part, the content is accurate and unbiased. However, the author only includes four types of research designs used in the social sciences that contain quantitative elements: 1) mixed methods, 2) case study, 3) quasi-experiment, and 4) action research. I wonder why correlational research is not included as another type of quantitative research design, as it has been introduced and emphasized in Section 6 by the author.

I believe the content is up-to-date and that necessary updates will be relatively easy and straightforward to implement.

The text is easy to read and provides adequate context for any technical terminology used. However, the author could provide more detailed information about estimating the minimum sample size rather than simply referring readers to online sample-size calculators on another website.

The text is internally consistent in terms of terminology and framework. The author provides the right amount of information with additional information or resources for the readers.

The text includes seven sections. Therefore, it is easier for the instructor to allocate or divide the content into different weeks of instruction within the course.

Yes, the topics in the text are presented in a logical and clear fashion. The author provides clear and precise terminologies, summarizes important content in Table or Figure forms, and offers examples in each section for readers to check their understanding.

The interface of the book is consistent and clear, and all the images and charts provided in the book are appropriate. However, I did encounter some navigation problems, as a couple of links are not working or require permission to access them (pages 10 and 27).

No grammatical errors were found.

Nothing culturally insensitive or offensive was found in the language or in the examples provided.

As the book title states, this book provides “A Quick Guide to Quantitative Research in Social Science.” It offers easy-to-read information and introduces the readers to the research process, such as research questions, research paradigms, research designs, research methods, data collection, data analysis, and data discussion. However, some links are not working or need permissions to access them (pages 10 and 27).

Reviewed by Hsiao-Chin Kuo, Assistant Professor, Northeastern Illinois University on 4/26/21, updated 4/28/21


As a quick guide, it covers basic concepts related to quantitative research. It starts with WHY quantitative research with regard to asking research questions and considering research paradigms, then provides an overview of research design and process, discusses methods, data collection and analysis, and ends with writing a research report. It also identifies its target readers/users as those beginning to explore quantitative research. It would be helpful to include more examples for readers/users who are new to quantitative research.

Its content is mostly accurate and shows no bias, given its nature as a quick guide. Yet it is also quite simplified, such as its explanations of mixed methods, case study, quasi-experimental research, and action research. It provides resources for extended reading, yet more recent works would be helpful.

The book is relevant given its nature as a quick guide. It would be helpful to provide more recent works in its resources for extended reading, such as the section for Survey Research (p. 12). It would also be helpful to include more information to introduce common tools and software for statistical analysis.

The book is written with clear and understandable language. Important terms and concepts are presented with plain explanations and examples. Figures and tables are also presented to support its clarity. For example, Table 4 (p. 20) gives an easy-to-follow overview of different statistical tests.

The framework is very consistent with key points, further explanations, examples, and resources for extended reading. The sample studies are presented following the layout of the content, such as research questions, design and methods, and analysis. These examples help reinforce readers' understanding of these common research elements.

The book is divided into seven chapters. Each chapter clearly discusses an aspect of quantitative research. It can be easily divided into modules for a class or for a theme in a research methods class. Chapters are short and provide additional resources for extended reading.

The topics in the chapters are presented in a logical and clear structure and are easy to follow to a degree. It would also be helpful to include the chapter number and title in the header next to the page number.

The text is easy to navigate. Most of the figures and tables are displayed clearly. Yet there are several sections with empty space that are a bit confusing at first. Again, it would be helpful to include the chapter number/title next to the page number.

Grammatical Errors rating: 4

No major grammatical errors were found.

There are no cultural insensitivities noted.

Given the nature and purpose of this book, as a quick guide, it provides readers a quick reference for important concepts and terms related to quantitative research. Because this book is quite short (27 pages), it can be used as an overview/preview about quantitative research. Teacher's facilitation/input and extended readings will be needed for a deeper learning and discussion about aspects of quantitative research.

Reviewed by Yang Cheng, Assistant Professor, North Carolina State University on 1/6/21


It covers the most important topics such as research progress, resources, measurement, and analysis of the data.

The book accurately describes the types of research methods such as mixed-method, quasi-experiment, and case study. It talks about the research proposal and key differences between statistical analyses as well.

The book pinpointed the significance of running a quantitative research method and its relevance to the field of social science.

The book clearly tells us the differences between types of quantitative methods and the steps of running quantitative research for students.

The book is consistent in terms of terminologies such as research methods or types of statistical analysis.

It handles the headings and subheadings very well, and each subheading is useful for readers.

The book was organized very well to illustrate the topic of quantitative methods in the field of social science.

The pictures within the book could be further developed to describe the key concepts vividly.

The textbook contains no grammatical errors.

It is not culturally offensive in any way.

Overall, this is a simple and quick guide for this important topic. It should be valuable for undergraduate students who would like to learn more about research methods.

Reviewed by Pierre Lu, Associate Professor, University of Texas Rio Grande Valley on 11/20/20


As a quick guide to quantitative research in social sciences, the text covers most ideas and areas.

Mostly accurate content.

As a quick guide, content is highly relevant.

Succinct and clear.

Internally, the text is consistent in terms of terminology used.

The text is easily and readily divisible into smaller sections that can be used as assignments.

I like that there are examples throughout the book.

Easy to read. No interface/ navigation problems.

No grammatical errors detected.

I am not aware of any culturally insensitive descriptions. After all, this is a methodology book.

I think the book has potential to be adopted as a foundation for quantitative research courses, or as a review in the first weeks of an advanced quantitative course.

Reviewed by Sarah Fischer, Assistant Professor, Marymount University on 7/31/20


It is meant to be an overview, but it is incredibly condensed and spends almost no time on key elements of statistics (such as what makes research generalizable, or what leads to research NOT being generalizable).

Content Accuracy rating: 1

Contains VERY significant errors, such as saying that one can "accept" a hypothesis. (One of the key aspects of hypothesis testing is that one either rejects or fails to reject a hypothesis, but NEVER accepts a hypothesis.)

Very relevant to those experiencing the research process for the first time. However, it is written by someone working in the natural sciences but is a text for social sciences. This does not explain the errors, but does explain why sometimes the author assumes things about the readers ("hail from more subjectivist territory") that are likely not true.

Clarity rating: 3

Some statistical terminology is not explained clearly (or accurately), although the author has made attempts to do both.

Very consistently laid out.

Chapters are very short yet also point readers to outside texts for additional information. Easy to follow.

Generally logically organized.

Easy to navigate, images clear. The additional sources included need to be linked to.

Minor grammatical and usage errors throughout the text.

Makes efforts to be inclusive.

The idea of this book is strong--short guides like this are needed. However, this book would likely be strengthened by a revision to reduce inaccuracies and improve the definitions and technical explanations of statistical concepts. Since the book is specifically aimed at the social sciences, it would also improve the text to have more examples that are based in the social sciences (rather than the health sciences or the arts).

Reviewed by Michelle Page, Assistant Professor, Worcester State University on 5/30/20


This text is exactly intended to be what it says: A quick guide. A basic outline of quantitative research processes, akin to cliff notes. The content provides only the essentials of a research process and contains key terms. A student or new researcher would not be able to use this as a stand alone guide for quantitative pursuits without having a supplemental text that explains the steps in the process more comprehensively. The introduction does provide this caveat.

Content Accuracy rating: 3

There are no biases or errors that could be distinguished; however, its simplicity in content, although accurate as an outline of the process, may not convey the deeper meanings behind the specific quantitative research processes it explains.

The content is outlined in traditional format to highlight quantitative considerations for formatting research foundational pieces. The resources/references used to point the reader to literature sources can be easily updated with future editions.

The jargon in the text is simple to follow and provides adequate context for its purpose. It is simplified for its intention as a guide which is appropriate.

Each section of the text follows a consistent flow. The research content or concept is defined, and then a connection to the literature is provided to expand the reader's understanding of the section's content. Terminology is consistent with the quantitative process.

As an “outline” and guide, this text can be used to quickly identify the critical parts of the quantitative process. Although each section does not provide deeper content for meaningful use as a standalone text, it would be excellent as a reference for a course and can be used as a content guide for specific research courses.

The text’s outline and content are aligned and are in a logical flow in terms of the research considerations for quantitative research.

The only issue is that the format does not provide clickable links to the referenced articles; these would have to be cut and pasted into a browser. Functional clickable links in a text are very successful at leading the reader to the supplemental material.

No grammatical errors were noted.

This is a very good outline “guide” to help a new or student researcher to demystify the quantitative process. A successful outline of any process helps to guide work in a logical and systematic way. I think this simple guide is a great adjunct to more substantial research context.

Table of Contents

  • Section 1: What will this resource do for you?
  • Section 2: Why are you thinking about numbers? A discussion of the research question and paradigms.
  • Section 3: An overview of the Research Process and Research Designs
  • Section 4: Quantitative Research Methods
  • Section 5: the data obtained from quantitative research
  • Section 6: Analysis of data
  • Section 7: Discussing your Results

Ancillary Material

About the book.

This resource is intended as an easy-to-use guide for anyone who needs some quick and simple advice on quantitative aspects of research in the social sciences, covering subjects such as education, sociology, business, and nursing. If you are a qualitative researcher who needs to venture into the world of numbers, or a student instructed to undertake a quantitative research project despite a hatred for maths, then this booklet should be a real help.

The booklet was amended in 2022 to take into account previous review comments.  

About the Contributors

Christine Davies, Ph.D.


Open Oregon Educational Resources

11. Quantitative measurement

Chapter outline.

  • Conceptual definitions (17 minute read)
  • Operational definitions (36 minute read)
  • Measurement quality (21 minute read)
  • Ethical and social justice considerations (15 minute read)

Content warning: examples in this chapter contain references to ethnocentrism, toxic masculinity, racism in science, drug use, mental health and depression, psychiatric inpatient care, poverty and basic needs insecurity, pregnancy, and racism and sexism in the workplace and higher education.

11.1 Conceptual definitions

Learning objectives.

Learners will be able to…

  • Define measurement and conceptualization
  • Apply Kaplan’s three categories to determine the complexity of measuring a given variable
  • Identify the role previous research and theory play in defining concepts
  • Distinguish between unidimensional and multidimensional concepts
  • Critically apply reification to how you conceptualize the key variables in your research project

In social science, when we use the term  measurement , we mean the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. At its core, measurement is about defining one’s terms in as clear and precise a way as possible. Of course, measurement in social science isn’t quite as simple as using a measuring cup or spoon, but there are some basic tenets on which most social scientists agree when it comes to measurement. We’ll explore those, as well as some of the ways that measurement might vary depending on your unique approach to the study of your topic.

An important point here is that measurement does not require any particular instruments or procedures. What it does require is a systematic procedure for assigning scores, meanings, and descriptions to individuals or objects so that those scores represent the characteristic of interest. You can measure phenomena in many different ways, but you must be sure that how you choose to measure gives you information and data that lets you answer your research question. If you’re looking for information about a person’s income, but your main points of measurement have to do with the money they have in the bank, you’re not really going to find the information you’re looking for!

The question of what social scientists measure can be answered by asking yourself what social scientists study. Think about the topics you’ve learned about in other social work classes you’ve taken or the topics you’ve considered investigating yourself. Let’s consider Melissa Milkie and Catharine Warner’s study (2011) [1] of first graders’ mental health. In order to conduct that study, Milkie and Warner needed to have some idea about how they were going to measure mental health. What does mental health mean, exactly? And how do we know when we’re observing someone whose mental health is good and when we see someone whose mental health is compromised? Understanding how measurement works in research methods helps us answer these sorts of questions.

As you might have guessed, social scientists will measure just about anything that they have an interest in investigating. For example, those who are interested in learning something about the correlation between social class and levels of happiness must develop some way to measure both social class and happiness. Those who wish to understand how well immigrants cope in their new locations must measure immigrant status and coping. Those who wish to understand how a person’s gender shapes their workplace experiences must measure gender and workplace experiences (and get more specific about which experiences are under examination). You get the idea. Social scientists can and do measure just about anything you can imagine observing or wanting to study. Of course, some things are easier to observe or measure than others.


Observing your variables

In 1964, philosopher Abraham Kaplan (1964) [2] wrote The   Conduct of Inquiry,  which has since become a classic work in research methodology (Babbie, 2010). [3] In his text, Kaplan describes different categories of things that behavioral scientists observe. One of those categories, which Kaplan called “observational terms,” is probably the simplest to measure in social science. Observational terms are the sorts of things that we can see with the naked eye simply by looking at them. Kaplan roughly defines them as conditions that are easy to identify and verify through direct observation. If, for example, we wanted to know how the conditions of playgrounds differ across different neighborhoods, we could directly observe the variety, amount, and condition of equipment at various playgrounds.

Indirect observables , on the other hand, are less straightforward to assess. In Kaplan’s framework, they are conditions that are subtle and complex that we must use existing knowledge and intuition to define. If we conducted a study for which we wished to know a person’s income, we’d probably have to ask them their income, perhaps in an interview or a survey. Thus, we have observed income, even if it has only been observed indirectly. Birthplace might be another indirect observable. We can ask study participants where they were born, but chances are good we won’t have directly observed any of those people being born in the locations they report.

Sometimes the measures that we are interested in are more complex and more abstract than observational terms or indirect observables. Think about some of the concepts you’ve learned about in other social work classes—for example, ethnocentrism. What is ethnocentrism? Well, from completing an introduction to social work class you might know that it has something to do with the way a person judges another’s culture. But how would you  measure  it? Here’s another construct: bureaucracy. We know this term has something to do with organizations and how they operate but measuring such a construct is trickier than measuring something like a person’s income. The theoretical concepts of ethnocentrism and bureaucracy represent ideas whose meanings we have come to agree on. Though we may not be able to observe these abstractions directly, we can observe their components.

Kaplan referred to these more abstract things that behavioral scientists measure as constructs.  Constructs  are “not observational either directly or indirectly” (Kaplan, 1964, p. 55), [4] but they can be defined based on observables. For example, the construct of bureaucracy could be measured by counting the number of supervisors that need to approve routine spending by public administrators. The greater the number of administrators that must sign off on routine matters, the greater the degree of bureaucracy. Similarly, we might be able to ask a person the degree to which they trust people from different cultures around the world and then assess the ethnocentrism inherent in their answers. We can measure constructs like bureaucracy and ethnocentrism by defining them in terms of what we can observe. [5]
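To make the bureaucracy example concrete, here is a hypothetical Python sketch that scores the construct by counting how many sign-offs routine spending requires in each invented agency. The data and scoring rule are illustrative assumptions, not part of Kaplan's framework.

```python
# Hypothetical approval chains for routine spending in two invented agencies.
routine_approvals = {
    "agency_a": ["unit supervisor", "division chief"],
    "agency_b": ["unit supervisor", "division chief", "deputy director", "director"],
}

# Our simple measure of the construct: more required sign-offs means a higher
# bureaucracy score.
bureaucracy_score = {agency: len(chain) for agency, chain in routine_approvals.items()}
print(bureaucracy_score)  # {'agency_a': 2, 'agency_b': 4}
```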

The idea of coming up with your own measurement tool might sound pretty intimidating at this point. The good news is that if you find something in the literature that works for you, you can use it (with proper attribution, of course). If there are only pieces of it that you like, you can reuse those pieces (with proper attribution and describing/justifying any changes). You don’t always have to start from scratch!

Look at the variables in your research question.

  • Classify them as direct observables, indirect observables, or constructs.
  • Do you think measuring them will be easy or hard?
  • What are your first thoughts about how to measure each variable? No wrong answers here, just write down a thought about each variable.


Measurement starts with conceptualization

In order to measure the concepts in your research question, we first have to understand what we think about them. As an aside, the word concept has come up quite a bit, and it is important to be sure we have a shared understanding of that term. A concept is the notion or image that we conjure up when we think of some cluster of related observations or ideas. For example, masculinity is a concept. What do you think of when you hear that word? Presumably, you imagine some set of behaviors and perhaps even a particular style of self-presentation. Of course, we can’t necessarily assume that everyone conjures up the same set of ideas or images when they hear the word masculinity. While there are many possible ways to define the term and some may be more common or have more support than others, there is no universal definition of masculinity. What counts as masculine may shift over time, from culture to culture, and even from individual to individual (Kimmel, 2008). This is why defining our concepts is so important.

Not all researchers clearly explain their theoretical or conceptual framework for their study, but they should! Without understanding how a researcher has defined their key concepts, it would be nearly impossible to understand the meaning of that researcher’s findings and conclusions. Back in Chapter 7, you developed a theoretical framework for your study based on a survey of the theoretical literature in your topic area. If you haven’t done that yet, consider flipping back to that section to familiarize yourself with some of the techniques for finding and using theories relevant to your research question. Continuing with our example on masculinity, we would need to survey the literature on theories of masculinity. After a few queries on masculinity, I found a wonderful article by Wong (2010) [6] that reviewed eight years of the journal Psychology of Men & Masculinity and analyzed how often different theories of masculinity were used. Not only can I get a sense of which theories are more accepted and which are more marginal in the social science on masculinity, but I am also able to identify a range of options from which I can find the theory or theories that will inform my project.

Identify a specific theory (or more than one theory) and how it helps you understand…

  • Your independent variable(s).
  • Your dependent variable(s).
  • The relationship between your independent and dependent variables.

Rather than completing this exercise from scratch, build from your theoretical or conceptual framework developed in previous chapters.

In quantitative methods, conceptualization involves writing out clear, concise definitions for our key concepts. These are the kind of definitions you are used to, like the ones in a dictionary. A conceptual definition involves defining a concept in terms of other concepts, usually by making reference to how other social scientists and theorists have defined those concepts in the past. Of course, new conceptual definitions are created all the time because our conceptual understanding of the world is always evolving.

Conceptualization is deceptively challenging: spelling out exactly what the concepts in your research question mean to you. Following along with our example, think about what comes to mind when you read the term masculinity. How do you know masculinity when you see it? Does it have something to do with men or with social norms? If so, perhaps we could define masculinity as the social norms that men are expected to follow. That seems like a reasonable start, and at this early stage of conceptualization, brainstorming about the images conjured up by concepts and playing around with possible definitions is appropriate. However, this is just the first step. At this point, you should be beyond brainstorming for your key variables because you have read a good amount of research about them.

In addition, we should consult previous research and theory to understand the definitions that other scholars have already given for the concepts we are interested in. This doesn’t mean we must use their definitions, but understanding how concepts have been defined in the past will help us to compare our conceptualizations with how other scholars define and relate concepts. Understanding prior definitions of our key concepts will also help us decide whether we plan to challenge those conceptualizations or rely on them for our own work. Finally, working on conceptualization is likely to help in the process of refining your research question to one that is specific and clear in what it asks. Conceptualization and operationalization (next section) are where “the rubber meets the road,” so to speak, and you have to specify what you mean by the question you are asking. As your conceptualization deepens, you will often find that your research question becomes more specific and clear.

If we turn to the literature on masculinity, we will surely come across work by Michael Kimmel , one of the preeminent masculinity scholars in the United States. After consulting Kimmel’s prior work (2000; 2008), [7] we might tweak our initial definition of masculinity. Rather than defining masculinity as “the social norms that men are expected to follow,” perhaps instead we’ll define it as “the social roles, behaviors, and meanings prescribed for men in any given society at any one time” (Kimmel & Aronson, 2004, p. 503). [8] Our revised definition is more precise and complex because it goes beyond addressing one aspect of men’s lives (norms), and addresses three aspects: roles, behaviors, and meanings. It also implies that roles, behaviors, and meanings may vary across societies and over time. Using definitions developed by theorists and scholars is a good idea, though you may find that you want to define things your own way.

As you can see, conceptualization isn’t as simple as applying any random definition that we come up with to a term. Defining our terms may involve some brainstorming at the very beginning. But conceptualization must go beyond that, to engage with or critique existing definitions and conceptualizations in the literature. Once we’ve brainstormed about the images associated with a particular word, we should also consult prior work to understand how others define the term in question. After we’ve identified a clear definition that we’re happy with, we should make sure that every term used in our definition will make sense to others. Are there terms used within our definition that also need to be defined? If so, our conceptualization is not yet complete. Our definition includes the concept of “social roles,” so we should have a definition for what those mean and become familiar with role theory to help us with our conceptualization. If we don’t know what roles are, how can we study them?

Let’s say we do all of that. We have a clear definition of the term masculinity with reference to previous literature and we also have a good understanding of the terms in our conceptual definition…then we’re done, right? Not so fast. You’ve likely met more than one man in your life, and you’ve probably noticed that they are not the same, even if they live in the same society during the same historical time period. This could mean there are dimensions of masculinity. In terms of social scientific measurement, concepts can be said to have multiple dimensions when there are multiple elements that make up a single concept. With respect to the term masculinity, dimensions could be based on gender identity, gender performance, sexual orientation, and so on. In any of these cases, the concept of masculinity would be considered to have multiple dimensions.

While you do not need to spell out every possible dimension of the concepts you wish to measure, it is important to identify whether your concepts are unidimensional (and therefore relatively easy to define and measure) or multidimensional (and therefore require multi-part definitions and measures). In this way, how you conceptualize your variables determines how you will measure them in your study. Unidimensional concepts are those that are expected to have a single underlying dimension. These concepts can be measured using a single measure or test. Examples include simple concepts such as a person’s weight, time spent sleeping, and so forth. 

One frustrating thing is that there is no clear demarcation between concepts that are inherently unidimensional or multidimensional. Even something as simple as age could be broken down into multiple dimensions, including mental age and chronological age, so where does conceptualization stop? How far down the dimensional rabbit hole do we have to go? Researchers should consider two things. First, how important is this variable in your study? If age is not important in your study (maybe it is a control variable), it seems like a waste of time to do a lot of work drawing from developmental theory to conceptualize it. A unidimensional measure from zero to dead is all the detail we need. On the other hand, if we were measuring the impact of age on masculinity, conceptualizing our independent variable (age) as multidimensional may provide a richer understanding of its impact on masculinity. Second, your conceptualization will lead directly to your operationalization of the variable, and once your operationalization is complete, make sure someone reading your study could follow how your conceptual definitions informed the measures you chose for your variables.

Write a conceptual definition for your independent and dependent variables.

  • Cite and attribute definitions to other scholars, if you use their words.
  • Describe how your definitions are informed by your theoretical framework.
  • Place your definition in conversation with other theories and conceptual definitions commonly used in the literature.
  • Are there multiple dimensions of your variables?
  • Are any of these dimensions important for you to measure?


Do researchers actually know what we’re talking about?

Conceptualization proceeds differently in qualitative research compared to quantitative research. Since qualitative researchers are interested in the understandings and experiences of their participants, it is less important for them to find one fixed definition for a concept before starting to interview or interact with participants. The researcher’s job is to accurately and completely represent how their participants understand a concept, not to test their own definition of that concept.

If you were conducting qualitative research on masculinity, you would likely consult previous literature like Kimmel’s work mentioned above. From your literature review, you may come up with a  working definition  for the terms you plan to use in your study, which can change over the course of the investigation. However, the definition that matters is the definition that your participants share during data collection. A working definition is merely a place to start, and researchers should take care not to think it is the only or best definition out there.

In qualitative inquiry, your participants are the experts (sound familiar, social workers?) on the concepts that arise during the research study. Your job as the researcher is to accurately and reliably collect and interpret their understanding of the concepts they describe while answering your questions. Your conceptualization is likely to change over the course of qualitative inquiry, as you learn more from your participants. Indeed, getting participants to comment on, extend, or challenge the definitions and understandings of other participants is a hallmark of qualitative research. This is the opposite of quantitative research, in which definitions must be completely set in stone before the inquiry can begin.

The contrast between qualitative and quantitative conceptualization is instructive for understanding how quantitative methods (and positivist research in general) privilege the knowledge of the researcher over the knowledge of study participants and community members. Positivism holds that the researcher is the “expert,” and can define concepts based on their expert knowledge of the scientific literature. This knowledge is in contrast to the lived experience that participants possess from experiencing the topic under examination day-in, day-out. For this reason, it would be wise to remind ourselves not to take our definitions too seriously and be critical about the limitations of our knowledge.

Conceptualization must be open to revisions, even radical revisions, as scientific knowledge progresses. While I’ve suggested consulting prior scholarly definitions of our concepts, you should not assume that prior, scholarly definitions are more real than the definitions we create. Likewise, we should not think that our own made-up definitions are any more real than any other definition. It would also be wrong to assume that just because definitions exist for some concept, the concept itself exists beyond some abstract idea in our heads. Building on the paradigmatic ideas behind interpretivism and the critical paradigm, the assumption that our abstract concepts exist in some concrete, tangible way is known as reification. Thinking about reification pushes us to examine the power dynamics behind how we create reality by how we define it.

Returning again to our example of masculinity, think about how our notions of masculinity have developed over the past few decades, and how different and yet so similar they are to patriarchal definitions throughout history. Conceptual definitions become more or less popular based on the power arrangements inside of social science and the broader world. Western knowledge systems are privileged, while others are viewed as unscientific and marginal. The historical domination of social science by white men from WEIRD countries meant that definitions of masculinity were imbued with their cultural biases and were designed, explicitly and implicitly, to preserve their power. This has inspired movements for cognitive justice as we seek to use social science to achieve global development.

Key Takeaways

  • Measurement is the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating.
  • Kaplan identified three categories of things that social scientists measure including observational terms, indirect observables, and constructs.
  • Some concepts have multiple elements or dimensions.
  • Researchers often use measures previously developed and studied by other researchers.
  • Conceptualization is a process that involves coming up with clear, concise definitions.
  • Conceptual definitions are based on the theoretical framework you are using for your study (and the paradigmatic assumptions underlying those theories).
  • Whether your conceptual definitions come from your own ideas or the literature, you should be able to situate them in terms of other commonly used conceptual definitions.
  • Researchers should acknowledge the limited explanatory power of their definitions for concepts and how oppression can shape what explanations are considered true or scientific.

Think historically about the variables in your research question.

  • How has the conceptual definition of your topic changed over time?
  • What scholars or social forces were responsible for this change?

Take a critical look at your conceptual definitions.

  • How might participants define these terms for themselves differently, based on their daily experience?
  • On what cultural assumptions are your conceptual definitions based?
  • Are your conceptual definitions applicable across all cultures that will be represented in your sample?

11.2 Operational definitions

  • Define and give an example of indicators and attributes for a variable
  • Apply the three components of an operational definition to a variable
  • Distinguish between levels of measurement for a variable and how those differences relate to measurement
  • Describe the purpose of composite measures like scales and indices

Conceptual definitions are like dictionary definitions. They tell you what a concept means by defining it using other concepts. In this section we will move from the abstract realm (theory) to the real world (measurement). Operationalization is the process by which researchers spell out precisely how a concept will be measured in their study. It involves identifying the specific research procedures we will use to gather data about our concepts. If conceptually defining your terms means looking at theory, how do you operationally define your terms? By looking for indicators of when your variable is present or not, more or less intense, and so forth. Operationalization is probably the most challenging part of quantitative research, but once it’s done, the design and implementation of your study will be straightforward.


Operationalization works by identifying specific  indicators that will be taken to represent the ideas we are interested in studying. If we are interested in studying masculinity, then the indicators for that concept might include some of the social roles prescribed to men in society such as breadwinning or fatherhood. Being a breadwinner or a father might therefore be considered indicators  of a person’s masculinity. The extent to which a man fulfills either, or both, of these roles might be understood as clues (or indicators) about the extent to which he is viewed as masculine.

Let’s look at another example of indicators. Each day, Gallup researchers poll 1,000 randomly selected Americans to ask them about their well-being. To measure well-being, Gallup asks these people to respond to questions covering six broad areas: physical health, emotional health, work environment, life evaluation, healthy behaviors, and access to basic necessities. Gallup uses these six factors as indicators of the concept that they are really interested in, which is well-being .

Identifying indicators can be even simpler than the examples described thus far. Political party affiliation is another relatively easy concept for which to identify indicators. If you asked a person what party they voted for in the last national election (or gained access to their voting records), you would get a good indication of their party affiliation. Of course, some voters split tickets between multiple parties when they vote and others swing from party to party each election, so our indicator is not perfect. Indeed, if our study were about political identity as a key concept, operationalizing it solely in terms of who they voted for in the previous election leaves out a lot of information about identity that is relevant to that concept. Nevertheless, it’s a pretty good indicator of political party affiliation.

Choosing indicators is not an arbitrary process. As described earlier, utilizing prior theoretical and empirical work in your area of interest is a great way to identify indicators in a scholarly manner. And your conceptual definitions will point you in the direction of relevant indicators. Empirical work will give you some very specific examples of how the important concepts in an area have been measured in the past and what sorts of indicators have been used. Often, it makes sense to use the same indicators as previous researchers; however, you may find that some previous measures have potential weaknesses that your own study will improve upon.

All of the examples in this chapter have dealt with questions you might ask a research participant on a survey or in a quantitative interview. If you plan to collect data from other sources, such as through direct observation or the analysis of available records, think practically about what the design of your study might look like and how you can collect data on various indicators feasibly. If your study asks about whether the participant regularly changes the oil in their car, you will likely not observe them directly doing so. Instead, you will likely need to rely on a survey question that asks them the frequency with which they change their oil or ask to see their car maintenance records.

  • What indicators are commonly used to measure the variables in your research question?
  • How can you feasibly collect data on these indicators?
  • Are you planning to collect your own data using a questionnaire or interview? Or are you planning to analyze available data like client files or raw data shared from another researcher’s project?

Remember, you need raw data. Your research project cannot rely solely on the results reported by other researchers or the arguments you read in the literature. A literature review is only the first part of a research project, and your review of the literature should inform the indicators you end up choosing when you measure the variables in your research question.

Unlike conceptual definitions, which contain other concepts, an operational definition consists of the following components: (1) the variable being measured and its attributes, (2) the measure you will use, and (3) how you plan to interpret the data collected from that measure to draw conclusions about the variable you are measuring.

Step 1: Specifying variables and attributes

The first component, the variable, should be the easiest part. At this point in quantitative research, you should have a research question that has at least one independent and at least one dependent variable. Remember that variables must be able to vary. For example, the United States is not a variable. Country of residence is a variable, as is patriotism. Similarly, if your sample only includes men, gender is a constant in your study, not a variable. A  constant is a characteristic that does not change in your study.

When social scientists measure concepts, they sometimes use the language of variables and attributes. A  variable refers to a quality or quantity that varies across people or situations. Attributes  are the characteristics that make up a variable. For example, the variable hair color would contain attributes like blonde, brown, black, red, gray, etc. A variable’s attributes determine its level of measurement. There are four possible levels of measurement: nominal, ordinal, interval, and ratio. The first two levels of measurement are  categorical , meaning their attributes are categories rather than numbers. The latter two levels of measurement are  continuous , meaning their attributes are numbers.


Levels of measurement

Hair color is an example of a nominal level of measurement. Nominal measures are categorical, and those categories cannot be mathematically ranked. As a brown-haired person (with some gray), I can’t say for sure that brown-haired people are better than blonde-haired people. As with all nominal levels of measurement, there is no ranking order between hair colors; they are simply different. That is what constitutes a nominal level of measurement. Gender and race are also measured at the nominal level.

What attributes are contained in the variable  hair color ? While blonde, brown, black, and red are common colors, some people may not fit into these categories if we only list these attributes. My wife, who currently has purple hair, wouldn’t fit anywhere. This means that our attributes were not exhaustive. Exhaustiveness  means that all possible attributes are listed. We may have to list a lot of colors before we can meet the criteria of exhaustiveness. Clearly, there is a point at which exhaustiveness has been reasonably met. If a person insists that their hair color is  light burnt sienna , it is not your responsibility to list that as an option. Rather, that person would reasonably be described as brown-haired. Perhaps listing a category for  other color  would suffice to make our list of colors exhaustive.

What about a person who has multiple hair colors at the same time, such as red and black? They would fall into multiple attributes. This violates the rule of  mutual exclusivity , in which a person cannot fall into two different attributes. Instead of listing all of the possible combinations of colors, perhaps you might include a  multi-color  attribute to describe people with more than one hair color.

Making sure attributes are mutually exclusive and exhaustive is about making sure all people are represented in the data record. For many years, the attributes for gender were only male or female. Now, our understanding of gender has evolved to encompass more attributes that better reflect the diversity in the world. Children of parents from different races were often classified as one race or another, even if they identified with both cultures. The option for bi-racial or multi-racial on a survey not only more accurately reflects the racial diversity in the real world but validates and acknowledges people who identify in that manner. If we did not measure race in this way, we would leave the data record empty for people who identify as biracial or multiracial, impairing our search for truth.

Unlike nominal-level measures, attributes at the  ordinal  level can be rank ordered. For example, someone’s degree of satisfaction in their romantic relationship can be ordered by rank. That is, you could say you are not at all satisfied, a little satisfied, moderately satisfied, or highly satisfied. Note that even though these have a rank order to them (not at all satisfied is certainly worse than highly satisfied), we cannot calculate a mathematical distance between those attributes. We can simply say that one attribute of an ordinal-level variable is more or less than another attribute.

This can get a little confusing when using rating scales . If you have ever taken a customer satisfaction survey or completed a course evaluation for school, you are familiar with rating scales. “On a scale of 1-5, with 1 being the lowest and 5 being the highest, how likely are you to recommend our company to other people?” That surely sounds familiar. Rating scales use numbers, but only as a shorthand, to indicate what attribute (highly likely, somewhat likely, etc.) the person feels describes them best. You wouldn’t say you are “2” likely to recommend the company, but you would say you are not very likely to recommend the company. Ordinal-level attributes must also be exhaustive and mutually exclusive, as with nominal-level variables.

At the  interval   level, attributes must also be exhaustive and mutually exclusive and there is equal distance between attributes. Interval measures are also continuous, meaning their attributes are numbers, rather than categories. IQ scores are interval level, as are temperatures in Fahrenheit and Celsius. Their defining characteristic is that we can say how much more or less one attribute differs from another. We cannot, however, say with certainty what the ratio of one attribute is in comparison to another. For example, it would not make sense to say that a person with an IQ score of 140 has twice the IQ of a person with a score of 70. However, the difference between IQ scores of 80 and 100 is the same as the difference between IQ scores of 120 and 140.

While we cannot say that someone with an IQ of 140 is twice as intelligent as someone with an IQ of 70 because IQ is measured at the interval level, we can say that someone with six siblings has twice as many as someone with three because number of siblings is measured at the ratio level. Finally, at the ratio   level, attributes are mutually exclusive and exhaustive, attributes can be rank ordered, the distance between attributes is equal, and attributes have a true zero point. Thus, with these variables, we can  say what the ratio of one attribute is in comparison to another. Examples of ratio-level variables include age and years of education. We know that a person who is 12 years old is twice as old as someone who is 6 years old. Height measured in meters and weight measured in kilograms are good examples. So are counts of discrete objects or events such as the number of siblings one has or the number of questions a student answers correctly on an exam. The differences between each level of measurement are visualized in Table 11.1.

Table 11.1 Criteria for Different Levels of Measurement

Criterion                           Nominal   Ordinal   Interval   Ratio
Exhaustive                             X         X         X         X
Mutually exclusive                     X         X         X         X
Rank-ordered                                     X         X         X
Equal distance between attributes                          X         X
True zero point                                                      X

Levels of measurement=levels of specificity

We have spent time learning how to determine our data’s level of measurement. Now what? How could we use this information to help us as we measure concepts and develop measurement tools? First, the types of statistical tests that we are able to use are dependent on our data’s level of measurement. With nominal-level measurement, for example, the only available measure of central tendency is the mode. With ordinal-level measurement, the median or mode can be used as indicators of central tendency. Interval and ratio-level measurement are typically considered the most desirable because they permit any measure of central tendency to be computed (i.e., mean, median, or mode). Also, ratio-level measurement is the only level that allows meaningful statements about ratios of scores. The higher the level of measurement, the more complex statistical tests we are able to conduct. This knowledge may help us decide what kind of data we need to gather, and how.
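To make this concrete, here is a minimal Python sketch (not part of the original text) showing which summary statistic fits each level of measurement; the variables, values, and ordinal codes are hypothetical.

```python
# Minimal sketch: which measure of central tendency fits each level of measurement.
# All data below are made up for illustration.
import statistics

hair_color = ["brown", "blonde", "brown", "black", "brown"]     # nominal
satisfaction = [1, 2, 2, 3, 4]   # ordinal codes: 1 = not at all ... 4 = highly satisfied
age_years = [19, 22, 22, 25, 34]                                # ratio

print(statistics.mode(hair_color))      # nominal -> mode only: 'brown'
print(statistics.median(satisfaction))  # ordinal -> median (or mode): 2
print(statistics.mean(age_years))       # interval/ratio -> mean is meaningful: 24.4
```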

That said, we have to balance this knowledge with the understanding that sometimes, collecting data at a higher level of measurement could negatively impact our studies. For instance, sometimes providing answers in ranges may make prospective participants feel more comfortable responding to sensitive items. Imagine that you were interested in collecting information on topics such as income, number of sexual partners, number of times someone used illicit drugs, etc. You would have to think about the sensitivity of these items and determine whether it would make more sense to collect some data at a lower level of measurement (e.g., asking whether they are sexually active or not (nominal) versus their total number of sexual partners (ratio)).

Finally, sometimes when analyzing data, researchers find a need to change a variable’s level of measurement. For example, a few years ago, a student was interested in studying the relationship between mental health and life satisfaction. This student used a variety of measures. One item asked about the number of mental health symptoms, reported as the actual number. When analyzing the data, my student examined the mental health symptom variable and noticed that she had two groups: those with zero or one symptom and those with many symptoms. Instead of using the ratio-level data (actual number of mental health symptoms), she collapsed her cases into two categories, few and many, and used this variable in her analyses. It is important to note that you can move data from a higher level of measurement to a lower level; however, you cannot move from a lower level to a higher level.
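A minimal sketch of this kind of recoding, with hypothetical symptom counts and a hypothetical cutoff, might look like this:

```python
# Minimal sketch: collapsing a ratio-level count into two ordinal categories.
# The counts and the cutoff (<= 1 symptom = "few") are hypothetical; base real cutoffs on theory and your data.
symptom_counts = [0, 1, 7, 0, 9, 12, 1, 8]

symptom_groups = ["few" if n <= 1 else "many" for n in symptom_counts]
print(symptom_groups)  # ['few', 'few', 'many', 'few', 'many', 'many', 'few', 'many']
# Note: the reverse move is impossible -- "many" cannot be turned back into an exact count.
```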

  • Check that the variables in your research question can vary…and that they are not constants or one of many potential attributes of a variable.
  • Think about the attributes your variables have. Are they categorical or continuous? What level of measurement seems most appropriate?


Step 2: Specifying measures for each variable

Let’s pick a social work research question and walk through the process of operationalizing variables to see how specific we need to get. I’m going to hypothesize that residents of a psychiatric unit who are more depressed are less likely to be satisfied with care. Remember, this would be an inverse relationship—as depression increases, satisfaction decreases. In this question, depression is my independent variable (the cause) and satisfaction with care is my dependent variable (the effect). Now that we have identified our variables, their attributes, and their levels of measurement, we move on to the second component: the measure itself.

So, how would you measure my key variables: depression and satisfaction? What indicators would you look for? Some students might say that depression could be measured by observing a participant’s body language. They may also say that a depressed person will often express feelings of sadness or hopelessness. In addition, a satisfied person might be happy around service providers and often express gratitude. While these factors may indicate that the variables are present, they lack coherence. Unfortunately, what this “measure” is actually saying is “I know depression and satisfaction when I see them.” While you are likely a decent judge of depression and satisfaction, you need to provide more information in a research study about how you plan to measure your variables. Your judgment is subjective, based on your own idiosyncratic experiences with depression and satisfaction. It couldn’t be replicated by another researcher, and it can’t be applied consistently to a large group of people. Operationalization requires that you come up with a specific and rigorous measure for seeing who is depressed or satisfied.

Finding a good measure for your variable depends on the kind of variable it is. Variables that are directly observable don’t come up very often in my students’ classroom projects, but they might include things like taking someone’s blood pressure, marking attendance or participation in a group, and so forth. To measure an indirectly observable variable like age, you would probably put a question on a survey that asked, “How old are you?” Measuring a variable like income might require some more thought, though. Are you interested in this person’s individual income or the income of their family unit? This might matter if your participant does not work or is dependent on other family members for income. Do you count income from social welfare programs? Are you interested in their income per month or per year? Even though indirect observables are relatively easy to measure, the measures you use must be clear in what they are asking, and operationalization is all about figuring out the specifics of what you want to know. For more complicated constructs, you will need compound measures (that use multiple indicators to measure a single variable).

How you plan to collect your data also influences how you will measure your variables. For social work researchers using secondary data like client records as a data source, you are limited by what information is in the data sources you can access. If your organization uses a given measurement for a mental health outcome, that is the one you will use in your study. Similarly, if you plan to study how long a client was housed after an intervention using client visit records, you are limited by how their caseworker recorded their housing status in the chart. One of the benefits of collecting your own data is being able to select the measures you feel best exemplify your understanding of the topic.

Measuring unidimensional concepts

The previous section mentioned two important considerations: how complicated the variable is and how you plan to collect your data. With these in hand, we can use the level of measurement to further specify how you will measure your variables and consider specialized rating scales developed by social science researchers.

Measurement at each level

Nominal measures assess categorical variables. These measures are used for variables or indicators that have mutually exclusive attributes, but that cannot be rank-ordered. Nominal measures ask about the variable and provide names or labels for different attribute values like social work, counseling, and nursing for the variable profession. Nominal measures are relatively straightforward.

Ordinal measures often use a rating scale, which is an ordered set of responses that participants must choose from. Figure 11.1 shows several examples. The number of response options on a typical rating scale is usually five or seven, though it can range from three to eleven. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine the answer by offering them the relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). [9] Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and overly long or specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 11.1 is a visual-analog scale, on which participants make a mark somewhere along a horizontal line to indicate the magnitude of their response.
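To show what “verbal labels for respondents, numeric codes for analysis” can look like in practice, here is a minimal Python sketch; the five frequency labels and the 1–5 coding are illustrative assumptions, not requirements from the text.

```python
# Minimal sketch: respondents see only verbal labels; numeric codes are attached at analysis time.
# The 1-5 coding scheme is an illustrative assumption.
frequency_codes = {"Never": 1, "Rarely": 2, "Sometimes": 3, "Often": 4, "Always": 5}

responses = ["Sometimes", "Always", "Rarely", "Often"]   # hypothetical answers
coded = [frequency_codes[r] for r in responses]
print(coded)  # [3, 5, 2, 4]
```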


Interval measures are those where the values measured are not only rank-ordered, but are also equidistant from adjacent attributes. For example, on the temperature scale (in Fahrenheit or Celsius), the difference between 30 and 40 degrees Fahrenheit is the same as that between 80 and 90 degrees Fahrenheit. Likewise, if you have a scale that asks respondents’ annual income using the following attributes (ranges): $0 to 10,000, $10,000 to 20,000, $20,000 to 30,000, and so forth, this is also an interval measure, because the mid-points of each range (i.e., $5,000, $15,000, $25,000, etc.) are equidistant from each other. The intelligence quotient (IQ) scale is also an interval measure, because the measure is designed such that the difference between IQ scores of 100 and 110 is supposed to be the same as between 110 and 120 (although we do not really know whether that is truly the case). Interval measures allow us to examine “how much more” one attribute is when compared to another, which is not possible with nominal or ordinal measures. You may find researchers who “pretend” (incorrectly) that ordinal rating scales are actually interval measures so that they can use different statistical techniques for analyzing them. As we will discuss in the latter part of the chapter, this is a mistake because there is no way to know whether the difference between a 3 and a 4 on a rating scale is the same as the difference between a 2 and a 3. Those numbers are just placeholders for categories.

Ratio measures are those that have all the qualities of nominal, ordinal, and interval scales, and in addition, also have a “true zero” point (where the value zero implies lack or non-availability of the underlying construct). Think about how to measure the number of people working in human resources at a social work agency. It could be one, several, or none (if the agency contracts out for those services). Measuring interval and ratio data is relatively easy, as people either select or input a number for their answer. If you ask a person how many eggs they purchased last week, they can simply tell you they purchased a dozen eggs at the store, two at breakfast on Wednesday, or none at all.

Commonly used rating scales in questionnaires

The level of measurement will give you the basic information you need, but social scientists have developed specialized instruments for use in questionnaires, a common tool used in quantitative research. As we mentioned before, if you plan to source your data from client files or previously published results, you will be limited to the measures already contained in those sources.

Although Likert scale is a term colloquially used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning. In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people’s attitudes (Likert, 1932). [10] It involves presenting people with several statements—including both favorable and unfavorable statements—about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree. Numbers are assigned to each response and then summed across all items to produce a score representing the attitude toward the person, group, or idea. For items that are phrased in an opposite direction (e.g., negatively worded statements instead of positively worded statements), reverse coding is used so that the numerical scoring of statements also runs in the opposite direction. The entire set of items came to be called a Likert scale, as indicated in Table 11.2 below.

Unless you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a rating scale. Likert scales allow for more granularity (more finely tuned response) than yes/no items, including whether respondents are neutral to the statement. Below is an example of how we might use a Likert scale to assess your attitudes about research as you work your way through this textbook.

Table 11.2 Likert scale (response options for each item: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree)
  • I like research more now than when I started reading this book.
  • This textbook is easy to use.
  • I feel confident about how well I understand levels of measurement.
  • This textbook is helping me plan my research proposal.
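To make the scoring rule concrete, here is a minimal Python sketch of summing 5-point Likert items with reverse coding; the responses and the negatively worded item (“Research is boring”) are illustrative assumptions and are not part of Table 11.2.

```python
# Minimal sketch: scoring a 5-point Likert scale (1 = Strongly Disagree ... 5 = Strongly Agree).
# The responses and the choice of negatively worded item are hypothetical.
responses = {
    "I like research more now than when I started reading this book": 4,
    "This textbook is easy to use": 5,
    "Research is boring": 2,   # negatively worded item -> reverse code before summing
}
negatively_worded = {"Research is boring"}

# On a 5-point scale, reverse coding flips the response: 6 - value.
score = sum(6 - value if item in negatively_worded else value
            for item, value in responses.items())
print(score)  # 4 + 5 + (6 - 2) = 13
```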

Semantic differential scales are composite (multi-item) scales in which respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites. Whereas in the above Likert scale the participant is asked how much they agree or disagree with a statement, in a semantic differential scale the participant is asked to indicate how they feel about a specific item. This makes the semantic differential scale an excellent technique for measuring people’s attitudes or feelings toward objects, events, or behaviors. Table 11.3 is an example of a semantic differential scale that was created to assess participants’ feelings about this textbook.

Table 11.3 Semantic differential scale (mark one point per row: Very much, Somewhat, Neither, Somewhat, Very much)
  • Boring – Exciting
  • Useless – Useful
  • Hard – Easy
  • Irrelevant – Applicable

The Guttman scale, designed by Louis Guttman, is a composite scale that uses a series of items arranged in increasing order of intensity (least intense to most intense) of the concept. This type of scale allows us to understand the intensity of beliefs or feelings. Each item in the example Guttman scale below has a weight (not indicated on the tool itself) which varies with the intensity of that item, and the weighted combination of each response is used as an aggregate measure of an observation.

Example Guttman Scale Items

  • I often felt the material was not engaging                               Yes/No
  • I was often thinking about other things in class                     Yes/No
  • I was often working on other tasks during class                     Yes/No
  • I will work to abolish research from the curriculum              Yes/No

Notice how the items move from lower intensity to higher intensity. A researcher reviews the yes answers and creates a score for each participant.
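Here is a minimal Python sketch of how such weighted yes/no scoring could be computed; the weights of 1 through 4 are illustrative assumptions, since the text notes that weights are not shown on the tool itself.

```python
# Minimal sketch: aggregating Guttman-style yes/no items using increasing weights.
# The weights (1-4) are hypothetical; a real instrument documents its own weighting and scoring rules.
items = {
    "I often felt the material was not engaging": 1,
    "I was often thinking about other things in class": 2,
    "I was often working on other tasks during class": 3,
    "I will work to abolish research from the curriculum": 4,
}
yes_answers = [  # hypothetical participant who endorsed the two least intense items
    "I often felt the material was not engaging",
    "I was often thinking about other things in class",
]

score = sum(items[item] for item in yes_answers)
print(score)  # 1 + 2 = 3
```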

Composite measures: Scales and indices

Depending on your research design, your measure may be something you put on a survey or pre/post-test that you give to your participants. For a variable like age or income, one well-worded question may suffice. Unfortunately, most variables in the social world are not so simple. Depression and satisfaction are multidimensional concepts. Relying on a single indicator like a question that asks “Yes or no, are you depressed?” does not encompass the complexity of depression, including issues with mood, sleeping, eating, relationships, and happiness. There is no easy way to delineate between multidimensional and unidimensional concepts, as it’s all in how you think about your variable. Satisfaction could be validly measured using a unidimensional ordinal rating scale. However, if satisfaction were a key variable in our study, we would need a theoretical framework and conceptual definition for it. That means we’d probably have more indicators to ask about, like timeliness, respect, sensitivity, and many others, and we would want our study to say something about what satisfaction truly means in terms of our other key variables. However, if satisfaction is not a key variable in your conceptual framework, it makes sense to operationalize it as a unidimensional concept.

For more complicated measures, researchers use scales and indices (sometimes called indexes) to measure their variables because they assess multiple indicators to develop a composite (or total) score. Composite scores provide a much greater understanding of concepts than a single item could. Although we won’t delve too deeply into the process of scale development, we will cover some important topics for you to understand how scales and indices developed by other researchers can be used in your project.

Although scales and indices exhibit differences (which will be discussed later), the two have several things in common.

  • Both are ordinal measures of variables.
  • Both can order the units of analysis in terms of specific variables.
  • Both are composite measures .


The previous section discussed how to measure respondents’ responses to predesigned items or indicators belonging to an underlying construct. But how do we create the indicators themselves? The process of creating the indicators is called scaling. More formally, scaling is a branch of measurement that involves the construction of measures by associating qualitative judgments about unobservable constructs with quantitative, measurable metric units. Stevens (1946) [11] said, “Scaling is the assignment of objects to numbers according to a rule.” This process of measuring abstract concepts in concrete terms remains one of the most difficult tasks in empirical social science research.

The outcome of a scaling process is a scale, which is an empirical structure for measuring items or indicators of a given construct. Understand that multidimensional “scales,” as discussed in this section, are a little different from the “rating scales” discussed in the previous section. A rating scale is used to capture the respondents’ reactions to a given item on a questionnaire. For example, an ordinally scaled item captures a value from “strongly disagree” to “strongly agree.” Attaching a rating scale to a statement or instrument is not scaling. Rather, scaling is the formal process of developing scale items, before rating scales can be attached to those items.

If creating your own scale sounds painful, don’t worry! For most multidimensional variables, you would likely be duplicating work that has already been done by other researchers. Specifically, this is a branch of science called psychometrics. You do not need to create a scale for depression because scales such as the Patient Health Questionnaire (PHQ-9), the Center for Epidemiologic Studies Depression Scale (CES-D), and Beck’s Depression Inventory (BDI) have been developed and refined over dozens of years to measure variables like depression. Similarly, scales such as the Patient Satisfaction Questionnaire (PSQ-18) have been developed to measure satisfaction with medical care. As we will discuss in the next section, these scales have been shown to be reliable and valid. While you could create a new scale to measure depression or satisfaction, a study with rigor would pilot test and refine that new scale over time to make sure it measures the concept accurately and consistently. This high level of rigor is often unachievable in student research projects because of the cost and time involved in pilot testing and validating, so using existing scales is recommended.

Unfortunately, there is no good one-stop shop for psychometric scales. The Mental Measurements Yearbook provides a searchable database of measures for social science variables, though it is woefully incomplete and often does not contain the full documentation for scales in its database. You can access it from a university library’s list of databases. If you can’t find anything in there, your next stop should be the methods section of the articles in your literature review. The methods section of each article will detail how the researchers measured their variables, and often the results section is instructive for understanding more about measures. In a quantitative study, researchers may have used a scale to measure key variables and will provide a brief description of that scale, its name, and maybe a few example questions. If you need more information, look at the results section and tables discussing the scale to get a better idea of how the measure works. Looking beyond the articles in your literature review, searching Google Scholar using queries like “depression scale” or “satisfaction scale” should also provide some relevant results. For example, when searching for documentation for the Rosenberg Self-Esteem Scale (which we will discuss in the next section), I found a report from researchers investigating acceptance and commitment therapy which details this scale and many others used to assess mental health outcomes. If you find the name of the scale somewhere but cannot find the documentation (all questions and answers plus how to interpret the scale), a general web search with the name of the scale and “.pdf” may bring you to what you need. Or, to get professional help with finding information, always ask a librarian!

Unfortunately, these approaches do not guarantee that you will be able to view the scale itself or get information on how it is interpreted. Many scales cost money to use and may require training to properly administer. You may also find scales that are related to your variable but would need to be slightly modified to match your study’s needs. You could adapt a scale to fit your study; however, changing even small parts of a scale can influence its accuracy and consistency. While it is perfectly acceptable in student projects to adapt a scale without testing it first (time may not allow you to do so), pilot testing is always recommended for adapted scales, and researchers seeking to draw valid conclusions and publish their results must take this additional step.

An index is a composite score derived from aggregating measures of multiple concepts (called components) using a set of rules and formulas. It is different from a scale. Scales also aggregate measures; however, these measures examine different dimensions or the same dimension of a single construct. A well-known example of an index is the consumer price index (CPI), which is computed every month by the Bureau of Labor Statistics of the U.S. Department of Labor. The CPI is a measure of how much consumers have to pay for goods and services (in general) and is divided into eight major categories (food and beverages, housing, apparel, transportation, healthcare, recreation, education and communication, and “other goods and services”), which are further subdivided into more than 200 smaller items. Each month, government employees call all over the country to get the current prices of more than 80,000 items. Using a complicated weighting scheme that takes into account the location and probability of purchase for each item, analysts then combine these prices into an overall index score using a series of formulas and rules.

Another example of an index is the Duncan Socioeconomic Index (SEI). This index is used to quantify a person’s socioeconomic status (SES) and is a combination of three concepts: income, education, and occupation. Income is measured in dollars, education in years or degrees achieved, and occupation is classified into categories or levels by status. These very different measures are combined to create an overall SES index score. However, SES index measurement has generated a lot of controversy and disagreement among researchers.

The process of creating an index is similar to that of a scale. First, conceptualize (define) the index and its constituent components. Though this appears simple, there may be a lot of disagreement on what components (concepts/constructs) should be included or excluded from an index. For instance, in the SES index, isn’t income correlated with education and occupation? And if so, should we include one component only or all three components? Reviewing the literature, using theories, and/or interviewing experts or key stakeholders may help resolve this issue. Second, operationalize and measure each component. For instance, how will you categorize occupations, particularly since some occupations may have changed with time (e.g., there were no Web developers before the Internet)? Third, create a rule or formula for calculating the index score. This process may involve a lot of subjectivity, so validating the index score using existing or new data is important.
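As a toy illustration of the third step, here is a minimal Python sketch that combines hypothetical, already-standardized components using a hypothetical weighting rule; it is not the actual SEI or CPI formula.

```python
# Minimal sketch: combining standardized components into an index score with a weighting rule.
# Component values, weights, and the 0-1 standardization are all hypothetical.
components = {"income": 0.65, "education": 0.80, "occupation": 0.50}  # each pre-scaled to 0-1
weights = {"income": 0.4, "education": 0.4, "occupation": 0.2}        # must sum to 1 in this rule

index_score = sum(components[c] * weights[c] for c in components)
print(round(index_score, 2))  # 0.65*0.4 + 0.80*0.4 + 0.50*0.2 = 0.68
```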

Scale and index development is often taught as its own course in doctoral education, so it is unreasonable to expect that you could develop a consistently accurate measure within the span of a week or two. Using available indices and scales is recommended for this reason.

Differences between scales and indices

Though indices and scales yield a single numerical score or value representing a concept of interest, they are different in many ways. First, indices often comprise components that are very different from each other (e.g., income, education, and occupation in the SES index) and are measured in different ways. Conversely, scales typically involve a set of similar items that use the same rating scale (such as a five-point Likert scale about customer satisfaction).

Second, indices often combine objectively measurable values such as prices or income, while scales are designed to assess subjective or judgmental constructs such as attitude, prejudice, or self-esteem. Some argue that the sophistication of the scaling methodology makes scales different from indexes, while others suggest that indexing methodology can be equally sophisticated. Nevertheless, indexes and scales are both essential tools in social science research.

Scales and indices seem like clean, convenient ways to measure different phenomena in social science, but just like with a lot of research, we have to be mindful of the assumptions and biases underneath. What if a scale or an index was developed using only White women as research participants? Is it going to be useful for other groups? It very well might be, but when using a scale or index on a group for whom it hasn’t been tested, it will be very important to evaluate the validity and reliability of the instrument, which we address in the rest of the chapter.

Finally, it’s important to note that while scales and indices are often made up of nominal- or ordinal-level items, when we combine those items into composite scores, we typically treat the scores as interval/ratio variables.

  • Look back to your work from the previous section: are your variables unidimensional or multidimensional?
  • Describe the specific measures you will use (actual questions and response options you will use with participants) for each variable in your research question.
  • If you are using a measure developed by another researcher but do not have all of the questions, response options, and instructions needed to implement it, put it on your to-do list to get them.


Step 3: How you will interpret your measures

The final stage of operationalization involves setting the rules for how the measure works and how the researcher should interpret the results. Sometimes, interpreting a measure can be incredibly easy. If you ask someone their age, you’ll probably interpret the results by noting the raw number (e.g., 22) someone provides and that it is lower or higher than other people’s ages. However, you could also recode that person into age categories (e.g., under 25, 20-29-years-old, generation Z, etc.). Even scales may be simple to interpret. If there is a scale of problem behaviors, one might simply add up the number of behaviors checked off–with a range from 1-5 indicating low risk of delinquent behavior, 6-10 indicating the student is moderate risk, etc. How you choose to interpret your measures should be guided by how they were designed, how you conceptualize your variables, the data sources you used, and your plan for analyzing your data statistically. Whatever measure you use, you need a set of rules for how to take any valid answer a respondent provides to your measure and interpret it in terms of the variable being measured.

For more complicated measures like scales, refer to the information provided by the author for how to interpret the scale. If you can’t find enough information from the scale’s creator, look at how the results of that scale are reported in the results section of research articles. For example, Beck’s Depression Inventory (BDI-II) uses 21 statements to measure depression and respondents rate their level of agreement on a scale of 0-3. The results for each question are added up, and the respondent is put into one of three categories: low levels of depression (1-16), moderate levels of depression (17-30), or severe levels of depression (31 and over).
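A minimal Python sketch of this kind of interpretation rule, using the cutoffs described above purely for illustration (always follow the scale’s official scoring manual), might look like this:

```python
# Minimal sketch: turning a summed scale score into an interpretive category.
# The cutoffs mirror the ranges stated in the text above and are for illustration only.
def interpret_depression(item_ratings):
    total = sum(item_ratings)            # 21 items, each rated 0-3
    if total <= 16:
        return total, "low"
    elif total <= 30:
        return total, "moderate"
    return total, "severe"

# Hypothetical respondent: five varied ratings plus sixteen ratings of 1.
print(interpret_depression([1, 2, 0, 3, 1] + [1] * 16))  # (23, 'moderate')
```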

One mistake I see often is that students will introduce another variable into their operational definition. This is incorrect. Your operational definition should mention only one variable—the variable being defined. While your study will certainly draw conclusions about the relationships between variables, that’s not what operationalization is. Operationalization specifies what instrument you will use to measure your variable and how you plan to interpret the data collected using that measure.

Operationalization is probably the trickiest component of basic research methods, so please don’t get frustrated if it takes a few drafts and a lot of feedback to get to a workable definition. At the time of this writing, I am in the process of operationalizing the concept of “attitudes towards research methods.” Originally, I thought that I could gauge students’ attitudes toward research methods by looking at their end-of-semester course evaluations. As I became aware of the potential methodological issues with student course evaluations, I opted to use focus groups of students to measure their common beliefs about research. You may recall some of these opinions from Chapter 1 , such as the common beliefs that research is boring, useless, and too difficult. After the focus group, I created a scale based on the opinions I gathered, and I plan to pilot test it with another group of students. After the pilot test, I expect that I will have to revise the scale again before I can implement the measure in a real social work research project. At the time I’m writing this, I’m still not completely done operationalizing this concept.

  • Operationalization involves spelling out precisely how a concept will be measured.
  • Operational definitions must include the variable, the measure, and how you plan to interpret the measure.
  • There are four different levels of measurement: nominal, ordinal, interval, and ratio (in increasing order of specificity).
  • Scales and indices are common ways to collect information and involve using multiple indicators in measurement.
  • A key difference between a scale and an index is that a scale contains multiple indicators for one concept, whereas an index examines multiple concepts (components).
  • Using scales developed and refined by other researchers can improve the rigor of a quantitative study.

Use the research question that you developed in the previous chapters and find a related scale or index that researchers have used. If you have trouble finding the exact phenomenon you want to study, get as close as you can.

  • What is the level of measurement for each item on each tool? Take a second and think about why the tool’s creator decided to include these levels of measurement. Identify any levels of measurement you would change and why.
  • If these tools don’t exist for what you are interested in studying, why do you think that is?

11.3 Measurement quality

  • Define and describe the types of validity and reliability
  • Assess for systematic error

The previous sections provided insight into measuring concepts in social work research. We discussed the importance of identifying concepts and their corresponding indicators as a way to help us operationalize them. In essence, we now understand that when we think about our measurement process, we must be intentional and thoughtful in the choices that we make. This section is all about how to judge the quality of the measures you’ve chosen for the key variables in your research question.

Reliability

First, let’s say we’ve decided to measure alcoholism by asking people to respond to the following question: Have you ever had a problem with alcohol? If we measure alcoholism this way, then it is likely that anyone who identifies as an alcoholic would respond “yes.” This may seem like a good way to identify our group of interest, but think about how you and your peer group may respond to this question. Would participants respond differently after a wild night out, compared to any other night? Could an infrequent drinker’s current headache from last night’s glass of wine influence how they answer the question this morning? How would that same person respond to the question before consuming the wine? In each case, the same person might respond differently to the same question at different points, so it is possible that our measure of alcoholism has a reliability problem. Reliability in measurement is about consistency.

One common problem of reliability with social scientific measures is memory. If we ask research participants to recall some aspect of their own past behavior, we should try to make the recollection process as simple and straightforward for them as possible. Sticking with the topic of alcohol intake, if we ask respondents how much wine, beer, and liquor they’ve consumed each day over the course of the past 3 months, how likely are we to get accurate responses? Unless a person keeps a journal documenting their intake, there will very likely be some inaccuracies in their responses. On the other hand, we might get more accurate responses if we ask a participant how many drinks of any kind they have consumed in the past week.

Reliability can be an issue even when we’re not reliant on others to accurately report their behaviors. Perhaps a researcher is interested in observing how alcohol intake influences interactions in public locations. They may decide to conduct observations at a local pub by noting how many drinks patrons consume and how their behavior changes as their intake changes. What if the researcher has to use the restroom, and the patron next to them takes three shots of tequila during the brief period the researcher is away from their seat? The reliability of this researcher’s measure of alcohol intake depends on their ability to physically observe every instance of patrons consuming drinks. If they are unlikely to be able to observe every such instance, then perhaps their mechanism for measuring this concept is not reliable.

The following subsections describe the types of reliability that are important for you to know about, but keep in mind that you may see other approaches to judging reliability mentioned in the empirical literature.

Test-retest reliability

When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.

Assessing test-retest reliability requires using the measure on a group of people at one time and then using it again on the same group of people at a later time. Unlike an experiment, you aren’t giving participants an intervention but trying to establish a reliable baseline of the variable you are measuring. Once you have these two measurements, you then look at the correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing the correlation coefficient. Figure 11.2 shows the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale, administered two times, a week apart. The correlation coefficient for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.

Figure 11.2. Scatterplot of scores at time 1 (x-axis) against scores at time 2 (y-axis), both ranging from 0 to 30; the points show a strong, positive correlation.
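Once you have the two sets of scores in hand, the correlation is straightforward to compute with any statistical software. The following is a minimal sketch, assuming Python with NumPy and entirely made-up self-esteem totals (0–30) for ten participants measured a week apart; it is illustrative only, not a complete analysis workflow.

```python
# Hypothetical test-retest data: ten participants measured one week apart.
import numpy as np

time1 = np.array([22, 15, 27, 18, 30, 12, 25, 20, 17, 24])
time2 = np.array([21, 16, 28, 17, 29, 14, 24, 21, 18, 25])

# Pearson correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")

# Common rule of thumb: +.80 or greater suggests good test-retest reliability
print("Looks reliable" if r >= 0.80 else "Reliability may be a concern")
```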

Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for concern.

Internal consistency

Another kind of reliability is internal consistency, which is the consistency of people’s responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people’s responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioral and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants’ bets were consistently high or low across trials. A statistic known as Cronbach’s alpha provides a way to measure how well each item of a scale is related to the others.
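Cronbach’s alpha can be computed directly from item-level data. The sketch below is illustrative only: it assumes Python with NumPy, an invented matrix of responses (rows are respondents, columns are items scored 1–5), and the standard formula alpha = (k / (k − 1)) × (1 − sum of item variances / variance of total scores).

```python
# Hypothetical responses to a four-item scale from six respondents.
import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])

k = items.shape[1]                               # number of items
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores

# Standard Cronbach's alpha formula
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```

With these made-up data, alpha comes out high because the items rise and fall together across respondents, which is exactly what internal consistency describes.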

Interrater reliability

Many behavioral measures involve significant judgment on the part of an observer or a rater. Interrater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested in measuring university students’ social skills, you could make video recordings of them as they interacted with another student whom they are meeting for the first time. Then you could have two or more observers watch the videos and rate each student’s level of social skills. To the extent that each participant does, in fact, have some level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly correlated with each other.
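Quantifying interrater reliability usually comes down to comparing the raters’ judgments numerically. As a rough sketch, assuming Python with NumPy and hypothetical 1–5 ratings from two observers watching the same ten recorded interactions, you could compute a simple correlation when the ratings are treated as roughly continuous, or Cohen’s kappa when they are treated as categories.

```python
# Hypothetical social-skills ratings from two observers of the same ten videos.
import numpy as np

rater_a = np.array([4, 3, 5, 2, 4, 3, 5, 1, 2, 4])
rater_b = np.array([4, 3, 4, 2, 5, 3, 5, 2, 2, 4])

# Treating ratings as roughly continuous: correlation between the two raters
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"Interrater correlation: r = {r:.2f}")

# Treating ratings as categories: Cohen's kappa corrects simple percent
# agreement for the agreement expected by chance alone
categories = np.union1d(rater_a, rater_b)
p_observed = np.mean(rater_a == rater_b)
p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Cohen's kappa = {kappa:.2f}")
```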

Validity

Validity , another key element of assessing measurement quality, is the extent to which the scores from a measure represent the variable they are intended to. But how do researchers make this judgment? We have already considered one factor that they take into account—reliability. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person’s index finger is a centimeter longer than another’s would indicate nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging the validity of a measure.

Face validity

Face validity is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. One reason is that it is based on people’s intuitions about human behavior, which are frequently wrong. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them—where many of the statements do not have any obvious relationship to the construct that they measure. For example, the items “I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches those of individuals who tend to suppress their aggression.

Content validity

Content validity is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that they think positive thoughts about exercising, feel good about exercising, and actually exercise. So to have good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion validity

Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. For example, people’s scores on a new measure of test anxiety should be negatively correlated with their performance on an important school exam. If it were found that people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them. For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of broken bones they have had over the years. When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity ; however, when the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity (because scores on the measure have “predicted” a future outcome).

Discriminant validity

Discriminant validity , on the other hand, is the extent to which scores on a measure are not  correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead.

Increasing the reliability and validity of measures

We have reviewed how to evaluate our measures based on reliability and validity considerations. However, what can we do while selecting or creating our tool to minimize the potential for error? Many of our options were covered in our discussion of reliability and validity. Nevertheless, the following list provides a quick summary of things you should do when creating or selecting a measurement tool. Not all of these will be feasible in your project, but implement those that fit your research context.

Make sure that you engage in a rigorous literature review so that you understand the concept that you are studying. This means understanding the different ways that your concept may manifest itself. This review should include a search for existing instruments. [12]

  • Do you understand all the dimensions of your concept(s)? Do you have a good understanding of their content?
  • What instruments exist? How many items are on the existing instruments? Are these instruments appropriate for your population?
  • Are these instruments standardized? Note: If an instrument is standardized, that means it has been rigorously studied and tested.

Consult content experts to review your instrument. This is a good way to check the face validity of your items. Content experts can also help you assess content validity. [13]

  • Do you have access to a reasonable number of content experts? If not, how can you locate them?
  • Did you provide a list of critical questions for your content reviewers to use in the reviewing process?

Pilot test your instrument on a sufficient number of people and get detailed feedback. [14] Ask your group to provide feedback on the wording and clarity of items. Keep detailed notes and make adjustments BEFORE you administer your final tool.

  • How many people will you use in your pilot testing?
  • How will you set up your pilot testing so that it mimics the actual process of administering your tool?
  • How will you receive feedback from your pilot testing group? Have you provided a list of questions for your group to think about?

Provide training for anyone collecting data for your project. [15] You should provide those helping you with a written research protocol that explains all of the steps of the project. You should also problem solve and answer any questions that those helping you may have. This will increase the chances that your tool will be administered in a consistent manner.

  • How will you conduct your orientation/training? How long will it be? What modality?
  • How will you select those who will administer your tool? What qualifications do they need?

When thinking of items, use a higher level of measurement, if possible. [16] This will provide more information and you can always downgrade to a lower level of measurement later.

  • Have you examined your items and the levels of measurement?
  • Have you thought about whether you need to modify the type of data you are collecting? Specifically, are you asking for information that is too specific (at a higher level of measurement) which may reduce participants’ willingness to participate?

Use multiple indicators for a variable. [17] Think about the number of items that you will include in your tool.

  • Do you have enough items? Enough indicators? The correct indicators?

Conduct an item-by-item assessment of multiple-item measures. [18] When you do this assessment, think about each word and how it changes the meaning of your item.

  • Are there items that are redundant? Do you need to modify, delete, or add items?


Types of error

As you can see, measures never perfectly describe what exists in the real world. Good measures demonstrate validity and reliability but will always have some degree of error. Systematic error (also called bias) causes our measures to consistently output incorrect data in one direction or another, usually due to an identifiable process. Imagine you created a measure of height, but you didn’t include an option for anyone over six feet tall. If you gave that measure to your local college or university, some of the taller students might not be measured accurately. In fact, you would be under the mistaken impression that the tallest person at your school was six feet tall, when in actuality there are likely people taller than six feet at your school. This error seems innocent, but if you were using that measure to help you build a new building, those people might hit their heads!

A less innocent form of error arises when researchers word questions in a way that might cause participants to think one answer choice is preferable to another. For example, if I were to ask you “Do you think global warming is caused by human activity?” you would probably feel comfortable answering honestly. But what if I asked you “Do you agree with 99% of scientists that global warming is caused by human activity?” Would you feel comfortable saying no, if that’s what you honestly felt? I doubt it. That is an example of a  leading question , a question with wording that influences how a participant responds. We’ll discuss leading questions and other problems in question wording in greater detail in Chapter 12 .

In addition to error created by the researcher, your participants can cause error in measurement. Some people will respond without fully understanding a question, particularly if the question is worded in a confusing way. Let’s consider another potential source of error. If we asked people if they always washed their hands after using the bathroom, would we expect people to be perfectly honest? Polling people about whether they wash their hands after using the bathroom might only elicit what people would like others to think they do, rather than what they actually do. This is an example of social desirability bias, in which participants in a research study want to present themselves in a positive, socially desirable way to the researcher. People in your study will want to seem tolerant, open-minded, and intelligent, but their true feelings may be closed-minded, simple, and biased. Participants may lie in this situation. This occurs often in political polling, which may show greater support for a candidate from a minority race, gender, or political party than actually exists in the electorate.

A related form of bias is called  acquiescence bias , also known as “yea-saying.” It occurs when people say yes to whatever the researcher asks, even when doing so contradicts previous answers. For example, a person might say yes to both “I am a confident leader in group discussions” and “I feel anxious interacting in group discussions.” Those two responses are unlikely to both be true for the same person. Why would someone do this? Similar to social desirability, people want to be agreeable and nice to the researcher asking them questions or they might ignore contradictory feelings when responding to each question. You could interpret this as someone saying “yeah, I guess.” Respondents may also act on cultural reasons, trying to “save face” for themselves or the person asking the questions. Regardless of the reason, the results of your measure don’t match what the person truly feels.

So far, we have discussed sources of error that come from choices made by respondents or researchers. Systematic errors will result in responses that are incorrect in one direction or another. For example, social desirability bias usually means that the number of people who say they will vote for a third party in an election is greater than the number of people who actually vote for that party. Systematic errors such as these can be reduced, but random error can never be eliminated. Unlike systematic error, which biases responses consistently in one direction or another, random error is unpredictable and does not push scores consistently higher or lower on a given measure. Instead, random error is more like statistical noise, which will likely average out across participants.

Random error is present in any measurement. If you’ve ever stepped on a bathroom scale twice and gotten two slightly different results, maybe a difference of a tenth of a pound, then you’ve experienced random error. Maybe you were standing slightly differently or had a fraction of your foot off of the scale the first time. If you were to take enough measures of your weight on the same scale, you’d be able to figure out your true weight. In social science, if you gave someone a scale measuring depression on a day after they lost their job, they would likely score differently than if they had just gotten a promotion and a raise. Even if the person were clinically depressed, our measure is subject to influence by the random occurrences of life. Thus, social scientists speak with humility about our measures. We are reasonably confident that what we found is true, but we must always acknowledge that our measures are only an approximation of reality.
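A small simulation can make this distinction concrete. The sketch below, assuming Python with NumPy and entirely invented numbers, shows that purely random error averages out over many readings, while a systematic bias (a scale that always reads two pounds heavy) does not.

```python
# Repeated weighings of a person whose true weight is 150 pounds (hypothetical).
import numpy as np

rng = np.random.default_rng(seed=42)
true_weight = 150.0

# Random error: each reading is off in an unpredictable direction
noisy = true_weight + rng.normal(loc=0.0, scale=0.3, size=1000)
print(f"Mean of readings with random error only: {noisy.mean():.2f}")   # close to 150

# Systematic error: a mis-calibrated scale that always reads 2 pounds heavy
biased = true_weight + 2.0 + rng.normal(loc=0.0, scale=0.3, size=1000)
print(f"Mean of readings from the biased scale:  {biased.mean():.2f}")  # stays near 152
```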

Humility is important in scientific measurement, as errors can have real consequences. At the time I’m writing this, my wife and I are expecting our first child. Like most people, we used a pregnancy test from the pharmacy. If the test said my wife was pregnant when she was not pregnant, that would be a false positive . On the other hand, if the test indicated that she was not pregnant when she was in fact pregnant, that would be a  false negative . Even if the test is 99% accurate, that means that one in a hundred women will get an erroneous result when they use a home pregnancy test. For us, a false positive would have been initially exciting, then devastating when we found out we were not having a child. A false negative would have been disappointing at first and then quite shocking when we found out we were indeed having a child. While both false positives and false negatives are not very likely for home pregnancy tests (when taken correctly), measurement error can have consequences for the people being measured.

  • Reliability is a matter of consistency.
  • Validity is a matter of accuracy.
  • There are many types of validity and reliability.
  • Systematic error may arise from the researcher, participant, or measurement instrument.
  • Systematic error biases results in a particular direction, whereas random error can be in any direction.
  • All measures are prone to error and should be interpreted with humility.

Use the measurement tools you located in the previous exercise. Evaluate the reliability and validity of these tools. Hint: You will need to go into the literature to “research” these tools.

  • Provide a clear statement regarding the reliability and validity of these tools. What strengths did you notice? What were the limitations?
  • Think about your target population . Are there changes that need to be made in order for one of these tools to be appropriate for your population?
  • If you decide to create your own tool, how will you assess its validity and reliability?

11.4 Ethical and social justice considerations

  • Identify potential cultural, ethical, and social justice issues in measurement.

With your variables operationalized, it’s time to take a step back and look at how measurement in social science impacts our daily lives. As we will see, how we measure things is shaped by power arrangements inside our society; more insidiously, by establishing what is scientifically true, measures have their own power to influence the world. Just like reification in the conceptual world, how we operationally define concepts can reinforce or fight against oppressive forces.


Data equity

How we decide to measure our variables determines what kind of data we end up with in our research project. Because scientific processes are a part of our sociocultural context, the same biases and oppressions we see in the real world can be manifested or even magnified in research data. Jagadish and colleagues (2021) [19] present four dimensions of data equity that are relevant to consider: representation of non-dominant groups within data sets; how data are collected, analyzed, and combined across datasets; equitable and participatory access to data; and the outcomes associated with the data collection. Historically, we have mostly focused on measures producing outcomes that are biased in one way or another, and this section reviews many such examples. However, it is important to note that equity must also come from designing measures that respond to questions like:

  • Are groups historically suppressed from the data record represented in the sample?
  • Are equity data gathered by researchers and used to uncover and quantify inequity?
  • Are the data accessible across domains and levels of expertise, and can community members participate in the design, collection, and analysis of the public data record?
  • Are the data collected used to monitor and mitigate inequitable impacts?

So, it’s not just about whether measures work for one population but not another. Data equity is about the context in which data are created, from how we measure people and things. We agree with these authors that data equity should be considered within the context of automated decision-making systems, recognizing the broader literature on the role of administrative systems in creating and reinforcing discrimination. To combat the inequitable processes and outcomes we describe below, researchers must foreground equity as a core component of measurement.

Flawed measures & missing measures

At the end of every semester, students in just about every university classroom in the United States complete similar student evaluations of teaching (SETs). Since every student is likely familiar with these, we can recognize many of the concepts we discussed in the previous sections. There are a number of rating scale questions that ask you to rate the professor, class, and teaching effectiveness on a scale of 1-5. Scores are averaged across students and used to determine the quality of teaching delivered by the faculty member. SETs scores are often a principal component of how faculty are reappointed to teaching positions. Would it surprise you to learn that student evaluations of teaching are of questionable quality? If your instructors are assessed with a biased or incomplete measure, how might that impact your education?

Most often, student scores are averaged across questions and reported as a final average. This average is used as one factor, often the most important factor, in a faculty member’s reappointment to teaching roles. We learned in this chapter that rating scales are ordinal, not interval or ratio, and the data are categories, not numbers. Although rating scales use a familiar 1-5 scale, the numbers 1, 2, 3, 4, & 5 are really just helpful labels for categories like “excellent” or “strongly agree.” If we relabeled these categories as letters (A-E) rather than as numbers (1-5), how would you average them?

Averaging ordinal data is methodologically dubious, as the numbers are merely a useful convention. As you will learn in Chapter 14 , taking the median value is what makes the most sense with ordinal data. Median values are also less sensitive to outliers. So, a single student who has strong negative or positive feelings towards the professor could bias the class’s SETs scores higher or lower than what the “average” student in the class would say, particularly for classes with few students or in which fewer students completed evaluations of their teachers.
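A small, invented example (assuming Python’s built-in statistics module) shows how one disgruntled student in a small class can drag the mean rating down while the median still reflects the typical response.

```python
# Hypothetical 1-5 ratings from a small class with one extreme low score.
import statistics

ratings = [4, 4, 5, 4, 4, 5, 4, 1]

print(f"Mean rating:   {statistics.mean(ratings):.2f}")   # pulled toward the outlier
print(f"Median rating: {statistics.median(ratings):.1f}") # reflects the typical student
```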

We care about teaching quality because more effective teachers will produce more knowledgeable and capable students. However, student evaluations of teaching are not particularly good indicators of teaching quality and are not associated with the independently measured learning gains of students (i.e., test scores, final grades) (Uttl et al., 2017). [20] This speaks to the lack of criterion validity. Higher teaching quality should be associated with better learning outcomes for students, but across multiple studies stretching back years, there is no association that cannot be better explained by other factors. To be fair, there are scholars who find that SETs are valid and reliable. For a thorough defense of SETs as well as a historical summary of the literature see Benton & Cashin (2012). [21]

Even though student evaluations of teaching often contain dozens of questions, researchers often find that the questions are so highly interrelated that one concept (or factor, as it is called in a factor analysis) explains a large portion of the variance in teachers’ scores on student evaluations (Clayson, 2018). [22] Personally, based on completing SETs myself, I believe that factor is probably best conceptualized as student satisfaction, which is obviously worthwhile to measure but is conceptually quite different from teaching effectiveness or whether a course achieved its intended outcomes. The lack of a clear operational and conceptual definition for the variable or variables being measured in student evaluations of teaching also speaks to a lack of content validity. Researchers check content validity by comparing the measurement method with the conceptual definition, but without a clear conceptual definition of the concept measured by student evaluations of teaching, it’s not clear how we can know our measure is valid. Indeed, the lack of clarity around what is being measured in teaching evaluations impairs students’ ability to provide reliable and valid evaluations. So, while many researchers argue that class-average SETs scores are reliable in that they are consistent over time and across classes, it is unclear what exactly is being measured, even if it is measured consistently (Clayson, 2018). [23]

As a faculty member, there are a number of things I can do to influence my evaluations and disrupt validity and reliability. Since SETs scores are associated with the grades students perceive they will receive (e.g., Boring et al., 2016), [24] guaranteeing everyone a final grade of A in my class will likely increase my SETs scores and my chances at tenure and promotion. I could time an email reminder to complete SETs with releasing high grades for a major assignment to boost my evaluation scores. On the other hand, student evaluations might be coincidentally timed with poor grades or difficult assignments that will bias student evaluations downward. Students may also infer I am manipulating them and give me lower SET scores as a result. To maximize my SET scores and chances at promotion, I also need to carefully select which courses I teach. Classes that are more quantitatively oriented generally receive lower ratings than more qualitative and humanities-driven classes, which makes my decision to teach social work research a poor strategy (Uttl & Smibert, 2017). [25] The only manipulative strategy I will admit to using is bringing food (usually cookies or donuts) to class during the period in which students are completing evaluations. Measurement is impacted by context.

As a white cis-gender male educator, I am adversely impacted by SETs’ sketchy validity, reliability, and methodology, but the other flaws with student evaluations actually help me while disadvantaging teachers from oppressed groups. Heffernan (2021) [26] provides a comprehensive overview of the sexism, racism, ableism, and prejudice baked into student evaluations:

“In all studies relating to gender, the analyses indicate that the highest scores are awarded in subjects filled with young, white, male students being taught by white English first language speaking, able-bodied, male academics who are neither too young nor too old (approx. 35–50 years of age), and who the students believe are heterosexual. Most deviations from this scenario in terms of student and academic demographics equates to lower SET scores. These studies thus highlight that white, able-bodied, heterosexual, men of a certain age are not only the least affected, they benefit from the practice. When every demographic group who does not fit this image is significantly disadvantaged by SETs, these processes serve to further enhance the position of the already privileged” (p. 5).

The staggering consistency of studies examining prejudice in SETs has led to some rather superficial reforms, like adding written instructions before SETs reminding students not to submit racist or sexist responses. Yet, even though we know that SETs are systematically biased against women, people of color, and people with disabilities, the overwhelming majority of universities in the United States continue to use them to evaluate faculty for promotion or reappointment. From a critical perspective, it is worth considering why university administrators continue to use such a biased and flawed instrument. SETs produce data that make it easy to compare faculty to one another and track faculty members over time. Furthermore, they offer students a direct opportunity to voice their concerns and highlight what went well.

Because students are the people with the greatest knowledge about what happened in the classroom and whether it met their expectations, providing them with open-ended questions is the most productive part of SETs. Personally, I have found focus groups written, facilitated, and analyzed by student researchers to be more insightful than SETs. MSW student activists and leaders may look for ways to evaluate faculty that are more methodologically sound and less systematically biased, creating institutional change by replacing or augmenting traditional SETs in their department. There is very rarely student input on the criteria and methodology for teaching evaluations, yet students are the most impacted by helpful or harmful teaching practices.

Students should fight for better assessment in the classroom because well-designed assessments provide documentation to support more effective teaching practices and discourage unhelpful or discriminatory practices. Flawed assessments like SETs can lead to a lack of information about problems with courses, instructors, or other aspects of the program. Think critically about what data your program uses to gauge its effectiveness. How might you introduce areas of student concern into how your program evaluates itself? Are there issues with food or housing insecurity, mentorship of nontraditional and first-generation students, or other issues that faculty should consider when they evaluate their program? Finally, as you transition into practice, think about how your agency measures its impact and how it privileges or excludes client and community voices in the assessment process.

Let’s consider an example from social work practice. Let’s say you work for a mental health organization that serves youth impacted by community violence. How should you measure the impact of your services on your clients and their community? Schools may be interested in reducing truancy, self-injury, or other behavioral concerns. However, by centering delinquent behaviors in how we measure our impact, we may be inattentive to the role of trauma, family dynamics, and other cognitive and social processes beyond “delinquent behavior.” Indeed, we may bias our interventions by focusing on things that are not as important to clients’ needs. Social workers want to make sure their programs are improving over time, and we rely on our measures to indicate what to change and what to keep. If our measures present a partial or flawed view, we lose our ability to establish and act on scientific truths.

While writing this section, one of the authors wrote this commentary article addressing potential racial bias in social work licensing exams. If you are interested in an example of missing or flawed measures that relates to systems your social work practice is governed by (rather than SETs which govern our practice in higher education) check it out!

You may also be interested in similar arguments against the standard grading scale (A-F), and why grades (numerical, letter, etc.) do not do a good job of measuring learning. Think critically about the role that grades play in your life as a student, your self-concept, and your relationships with teachers. Your test and grade anxiety is due in part to how your learning is measured. Those measurements end up becoming an official record of your scholarship and allow employers or funders to compare you to other scholars. The stakes for measurement are the same for participants in your research study.


Self-reflection and measurement

Student evaluations of teaching are just like any other measure. How we decide to measure what we are researching is influenced by our backgrounds, including our culture, implicit biases, and individual experiences. For me as a middle-class, cisgender white woman, if I don’t think carefully about it, the decisions I make about measurement will probably default to ones that make the most sense to me and others like me, and thus measure characteristics about people like us most accurately. There are major implications for research here because this could affect the validity of my measurements for other populations.

This doesn’t mean that standardized scales or indices, for instance, won’t work for diverse groups of people. What it means is that researchers must not ignore difference in deciding how to measure a variable in their research. Doing so may serve to push already marginalized people further into the margins of academic research and, consequently, social work intervention. Social work researchers, with our strong orientation toward celebrating difference and working for social justice, are obligated to keep this in mind for ourselves and encourage others to think about it in their research, too.

This involves reflecting on what we are measuring, how we are measuring, and why we are measuring. Do we have biases that impacted how we operationalized our concepts? Did we include stakeholders and gatekeepers in the development of our concepts? This can be a way to gain access to vulnerable populations. What feedback did we receive on our measurement process and how was it incorporated into our work? These are all questions we should ask as we are thinking about measurement. Further, engaging in this intentionally reflective process will help us maximize the chances that our measurement will be accurate and as free from bias as possible.

The NASW Code of Ethics discusses social work research and the importance of engaging in practices that do not harm participants. This is especially important considering that many of the topics studied by social workers are those that are disproportionately experienced by marginalized and oppressed populations. Some of these populations have had negative experiences with the research process: historically, their stories have been viewed through lenses that reinforced the dominant culture’s standpoint. Thus, when thinking about measurement in research projects, we must remember that the way in which concepts or constructs are measured will impact how marginalized or oppressed persons are viewed. It is important that social work researchers examine current tools to ensure appropriateness for their population(s). Sometimes this may require researchers to use existing tools. Other times, this may require researchers to adapt existing measures or develop completely new measures in collaboration with community stakeholders. In summary, the measurement protocols selected should be tailored and attentive to the experiences of the communities to be studied.

Unfortunately, social science researchers do not do a great job of sharing their measures in a way that allows social work practitioners and administrators to use them to evaluate the impact of interventions and programs on clients. Few scales are published under an open copyright license that allows other people to view and share them for free. Instead, the best way to find a scale mentioned in an article is often to simply search for it in Google with “.pdf” or “.docx” in the query to see if someone posted a copy online (usually in violation of copyright law). As we discussed in Chapter 4, this is an issue of information privilege, or the structuring impact of oppression and discrimination on groups’ access to and use of scholarly information. As a student at a university with a research library, you can access the Mental Measurements Yearbook to look up scales and indexes that measure client or program outcomes, while researchers unaffiliated with university libraries cannot do so. Similarly, the vast majority of scholarship in social work and allied disciplines does not share measures, data, or other research materials openly, a best practice in open and collaborative science. It is important to underscore these structural barriers to using valid and reliable scales in social work practice. An invalid or unreliable outcome test may cause ineffective or harmful programs to persist or may worsen existing prejudices and oppressions experienced by clients, communities, and practitioners.

But it’s not just about reflecting and identifying problems and biases in our measurement, operationalization, and conceptualization—what are we going to  do about it? Consider this as you move through this book and become a more critical consumer of research. Sometimes there isn’t something you can do in the immediate sense—the literature base at this moment just is what it is. But how does that inform what you will do later?

A place to start: Stop oversimplifying race

We will address many more of the critical issues related to measurement in the next chapter. One way to get started in bringing cultural awareness to scientific measurement is through a critical examination of how we analyze race quantitatively. There are many important methodological objections to how we measure the impact of race. We encourage you to watch Dr. Abigail Sewell’s three-part workshop series called “Nested Models for Critical Studies of Race & Racism” for the Inter-university Consortium for Political and Social Research (ICPSR). She discusses how to operationalize and measure inequality, racism, and intersectionality and critiques researchers’ attempts to oversimplify or overlook racism when we measure concepts in social science. If you are interested in developing your social work research skills further, consider applying for financial support from your university to attend an ICPSR summer seminar like Dr. Sewell’s where you can receive more advanced and specialized training in using research for social change.

  • Part 1: Creating Measures of Supraindividual Racism (2-hour video)
  • Part 2: Evaluating Population Risks of Supraindividual Racism (2-hour video)
  • Part 3: Quantifying Intersectionality (2-hour video)
  • Social work researchers must be attentive to personal and institutional biases in the measurement process that affect marginalized groups.
  • What is measured and how it is measured is shaped by power, and social workers must be critical and self-reflective in their research projects.

Think about your current research question and the tool(s) that you will use to gather data. Even if you haven’t chosen your tools yet, think of some that you have encountered in the literature so far.

  • How does your positionality and experience shape what variables you are choosing to measure and how you measure them?
  • Evaluate the measures in your study for potential biases.
  • If you are using measures developed by another researcher, investigate whether they are valid and reliable in other studies and across cultures.
  • Milkie, M. A., & Warner, C. H. (2011). Classroom learning environments and the mental health of first grade children. Journal of Health and Social Behavior, 52, 4–22.
  • Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. San Francisco, CA: Chandler Publishing Company.
  • Earl Babbie offers a more detailed discussion of Kaplan’s work in his text. You can read it in: Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.
  • In this chapter, we will use the terms concept and construct interchangeably. While each term has a distinct meaning in research conceptualization, we do not believe this distinction is important enough to warrant discussion in this chapter.
  • Wong, Y. J., Steinfeldt, J. A., Speight, Q. L., & Hickman, S. J. (2010). Content analysis of Psychology of Men & Masculinity (2000–2008). Psychology of Men & Masculinity, 11(3), 170.
  • Kimmel, M. (2000). The gendered society. New York, NY: Oxford University Press; Kimmel, M. (2008). Masculinity. In W. A. Darity Jr. (Ed.), International encyclopedia of the social sciences (2nd ed., Vol. 5, pp. 1–5). Detroit, MI: Macmillan Reference USA.
  • Kimmel, M., & Aronson, A. B. (2004). Men and masculinities: A–J. Denver, CO: ABC-CLIO.
  • Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27(3), 941–964.
  • Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1–55.
  • Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103(2684), 677–680.
  • Sullivan, G. M. (2011). A primer on the validity of assessment instruments. Journal of Graduate Medical Education, 3(2), 119–120. doi:10.4300/JGME-D-11-00075.1
  • Engel, R., & Schutt, R. (2013). The practice of research in social work (3rd ed.). Thousand Oaks, CA: SAGE.
  • Engel, R., & Schutt, R. (2013). The practice of research in social work (3rd ed.). Thousand Oaks, CA: SAGE.
  • Jagadish, H. V., Stoyanovich, J., & Howe, B. (2021). COVID-19 brings data equity challenges to the fore. Digital Government: Research and Practice, 2(2), 1–7.
  • Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42.
  • Benton, S. L., & Cashin, W. E. (2014). Student ratings of instruction in college and university courses. In Higher education: Handbook of theory and research (pp. 279–326). Springer, Dordrecht.
  • Clayson, D. E. (2018). Student evaluation of teaching and matters of reliability. Assessment & Evaluation in Higher Education, 43(4), 666–681.
  • Clayson, D. E. (2018). Student evaluation of teaching and matters of reliability. Assessment & Evaluation in Higher Education, 43(4), 666–681.
  • Boring, A., Ottoboni, K., & Stark, P. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research.
  • Uttl, B., & Smibert, D. (2017). Student evaluations of teaching: Teaching quantitative courses can be hazardous to one’s career. PeerJ, 5, e3299.
  • Heffernan, T. (2021). Sexism, racism, prejudice, and bias: A literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment & Evaluation in Higher Education, 1–11.

The process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena under investigation in a research study.

In measurement, conditions that are easy to identify and verify through direct observation.

In measurement, conditions that are subtle and complex that we must use existing knowledge and intuition to define.

Conditions that are not directly observable and represent states of being, experiences, and ideas.

A mental image that summarizes a set of similar observations, feelings, or ideas

developing clear, concise definitions for the key concepts in a research question

concepts that are comprised of multiple elements

concepts that are expected to have a single underlying dimension

assuming that abstract concepts exist in some concrete, tangible way

process by which researchers spell out precisely how a concept will be measured in their study

Clues that demonstrate the presence, intensity, or other aspects of a concept in the real world

unprocessed data that researchers can analyze using quantitative and qualitative methods (e.g., responses to a survey or interview transcripts)

a characteristic that does not change in a study

The characteristics that make up a variable

variables whose values are organized into mutually exclusive groups but whose numerical values cannot be used in mathematical operations.

variables whose values are mutually exclusive and can be used in mathematical operations

The lowest level of measurement; categories cannot be mathematically ranked, though they are exhaustive and mutually exclusive

Exhaustive categories are options for closed ended questions that allow for every possible response (no one should feel like they can't find the answer for them).

Mutually exclusive categories are options for closed ended questions that do not overlap, so people only fit into one category or another, not both.

Level of measurement that follows nominal level. Has mutually exclusive categories and a hierarchy (rank order), but we cannot calculate a mathematical distance between attributes.

An ordered set of responses that participants must choose from.

A level of measurement that is continuous, can be rank ordered, is exhaustive and mutually exclusive, and for which the distance between attributes is known to be equal, but for which there is no true zero point.

The highest level of measurement. Denoted by mutually exclusive categories, a hierarchy (order), values can be added, subtracted, multiplied, and divided, and the presence of an absolute zero.

measuring people’s attitude toward something by assessing their level of agreement with several statements about it

Composite (multi-item) scales in which respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites.

A composite scale using a series of items arranged in increasing order of intensity of the construct of interest, from least intense to most intense.

measurements of variables based on more than one indicator

An empirical structure for measuring items or indicators of the multiple dimensions of a concept.

a composite score derived from aggregating measures of multiple concepts (called components) using a set of rules and formulas

The ability of a measurement tool to measure a phenomenon the same way, time after time. Note: Reliability does not imply validity.

The extent to which scores obtained on a scale or other measure are consistent across time

The consistency of people’s responses across the items on a multiple-item measure. Responses about the same underlying construct should be correlated, though not perfectly.

The extent to which different observers are consistent in their assessment or rating of a particular characteristic or item.

The extent to which the scores from a measure represent the variable they are intended to.

The extent to which a measurement method appears “on its face” to measure the construct of interest

The extent to which a measure “covers” the construct of interest, i.e., its comprehensiveness in measuring the construct.

The extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with.

A type of criterion validity. Examines how well a tool provides the same scores as an already existing tool administered at the same point in time.

A type of criterion validity that examines how well your tool predicts a future criterion.

The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.

(also known as bias) refers to when a measure consistently outputs incorrect data, usually in one direction and due to an identifiable process

When a participant's answer to a question is altered due to the way in which a question is written. In essence, the question leads the participant to answer in a specific way.

Social desirability bias occurs when we create questions that lead respondents to answer in ways that don't reflect their genuine thoughts or feelings to avoid being perceived negatively.

In a measure, when people say yes to whatever the researcher asks, even when doing so contradicts previous answers.

Unpredictable error that does not result in scores that are consistently higher or lower on a given measure but are nevertheless inaccurate.

when a measure indicates the presence of a phenomenon, when in reality it is not present

when a measure does not indicate the presence of a phenomenon, when in reality it is present

the group of people whose needs your study addresses

The value in the middle when all our values are placed in numerical order. Also called the 50th percentile.

individuals or groups who have an interest in the outcome of the study you conduct

the people or organizations who control access to the population you want to study

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Source: J Korean Med Sci, 37(16), 2022 Apr 25.

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7, 10, 11, 13; 2) backed by preliminary evidence 9; 3) testable by ethical research 7, 9; 4) based on original ideas 9; 5) supported by evidence-based logical reasoning 10; and 6) stated as predictions that can be tested. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7, 11 Hypotheses are initially developed from a general theory and then branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning from specific observations or findings is used to form more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Table 1. Types of research questions and hypotheses

Quantitative research questions: descriptive research questions; comparative research questions; relationship research questions.

Quantitative research hypotheses: simple hypothesis; complex hypothesis; directional hypothesis; non-directional hypothesis; associative hypothesis; causal hypothesis; null hypothesis; alternative hypothesis; working hypothesis; statistical hypothesis; logical hypothesis; hypothesis-testing (quantitative hypothesis-testing research).

Qualitative research questions: contextual research questions; descriptive research questions; evaluation research questions; explanatory research questions; exploratory research questions; generative research questions; ideological research questions; ethnographic research questions; phenomenological research questions; grounded theory questions; qualitative case study questions.

Qualitative research hypotheses: hypothesis-generating (qualitative hypothesis-generating research).

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Quantitative research questions
Descriptive research question
- Measures responses of subjects to variables
- Presents variables to measure, analyze, or assess
What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?
Comparative research question
- Clarifies difference between one group with outcome variable and another group without outcome variable
Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?
- Compares the effects of variables
How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?
Relationship research question
- Defines trends, association, relationships, or interactions between dependent variable and independent variable
Is there a relationship between the number of medical student suicides (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable (simple hypothesis) or 2) between two or more independent and dependent variables (complex hypothesis). 4, 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome (directional hypothesis). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies (non-directional hypothesis). 4 In addition, hypotheses can 1) define interdependency between variables (associative hypothesis), 4 2) propose an effect on the dependent variable from manipulation of the independent variable (causal hypothesis), 4 3) state a negative relationship between two variables (null hypothesis), 4, 11, 15 4) replace the working hypothesis if rejected (alternative hypothesis), 15 5) explain the relationship of phenomena to possibly generate a theory (working hypothesis), 11 6) involve quantifiable variables that can be tested statistically (statistical hypothesis), 11 or 7) express a relationship whose interlinks can be verified logically (logical hypothesis). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3.

Quantitative research hypotheses
Simple hypothesis
- Predicts relationship between single dependent variable and single independent variable
If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered.
Complex hypothesis
- Foretells relationship between two or more independent and dependent variables
The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher would be the survival rate (1 dependent variable).
Directional hypothesis
- Identifies study direction based on theory towards particular outcome to clarify relationship between variables
Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects.
Non-directional hypothesis
- Nature of relationship between two variables or exact study direction is not identified
- Does not involve a theory
Women and men are different in terms of helpfulness. (Exact study direction is not identified)
Associative hypothesis
- Describes variable interdependency
- Change in one variable causes change in another variable
A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable).
Causal hypothesis
- An effect on dependent variable is predicted from manipulation of independent variable
A change into a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient.
Null hypothesis
- A negative statement indicating no relationship or difference between 2 variables
There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2).
Alternative hypothesis
- Following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables
The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2).
Working hypothesis
- A hypothesis that is initially accepted for further research to produce a feasible theory
Dairy cows fed with concentrates of different formulations will produce different amounts of milk.
Statistical hypothesis
- Assumption about the value of population parameter or relationship among several population characteristics
- Validity tested by a statistical experiment or analysis
The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2.
There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan.
Logical hypothesis
- Offers or proposes an explanation with limited or no extensive evidence
If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be less.
Hypothesis-testing (Quantitative hypothesis-testing research)
- Quantitative research uses deductive reasoning.
- This involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses.
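
To make the null and alternative hypotheses in Table 3 concrete, the sketch below simulates the kind of two-group comparison described there (pain scores under a new drug versus the current drug) and runs a two-sample t-test of the null hypothesis of no difference. It is a minimal illustration only; the group means, sample sizes, and significance level are invented for the example and are not taken from any cited study.

```python
# Minimal sketch: testing a null hypothesis of "no difference" between two groups.
# All data here are simulated for illustration; they are not from any cited study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical pain scores (0-10) for patients on the current drug vs. the new drug
current_drug = rng.normal(loc=6.0, scale=1.5, size=40)  # comparison group
new_drug = rng.normal(loc=5.2, scale=1.5, size=40)      # treatment group

# H0 (null hypothesis): mean pain scores do not differ between the two drugs.
# H1 (alternative hypothesis): mean pain scores differ between the two drugs.
t_stat, p_value = stats.ttest_ind(new_drug, current_drug)

alpha = 0.05  # pre-specified significance level (an assumption of this example)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: the data favor the alternative hypothesis of a difference.")
else:
    print("Fail to reject H0: no statistically significant difference detected.")
```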

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. A central question and associated subquestions are stated more often than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.

Qualitative research questions
Contextual research question
- Asks about the nature of what already exists
- Explores how individuals or groups function, to further clarify and understand the natural context of real-world problems
What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems)
Descriptive research question
- Aims to describe a phenomenon
What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities?
Evaluation research question
- Examines the effectiveness of existing practice or accepted frameworks
How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility?
Explanatory research question
- Clarifies a previously studied phenomenon and explains why it occurs
Why is there an increase in teenage pregnancy (phenomenon) in Tanzania?
Exploratory research question
- Explores areas that have not been fully investigated to have a deeper understanding of the research problem
What factors affect the mental health of medical students (areas that have not yet been fully investigated) during the COVID-19 pandemic?
Generative research question
- Develops an in-depth understanding of people’s behavior by asking ‘how would’ or ‘what if’ to identify problems and find solutions
How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative?
Ideological research question
- Aims to advance specific ideas or ideologies of a position
Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care?
Ethnographic research question
- Clarifies peoples’ nature, activities, their interactions, and the outcomes of their actions in specific settings
What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, their interactions, and the outcomes) of people in China who are suffering from pneumoconiosis?
Phenomenological research question
- Seeks to know more about the phenomena that have impacted an individual
What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual)
Grounded theory question
- Focuses on social processes asking about what happens and how people interact, or uncovering social relationships and behaviors of groups
What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed?
Qualitative case study question
- Assesses a phenomenon using different sources of data to answer “why” and “how” questions
- Considers how the phenomenon is influenced by its contextual situation.
How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan?
Qualitative research hypotheses
Hypothesis-generating (Qualitative hypothesis-generating research)
- Qualitative research uses inductive reasoning.
- This involves data collection from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and using the formal hypothesis as a framework for testing the hypothesis.
- Qualitative exploratory studies explore areas deeper, clarifying subjective experience and allowing formulation of a formal hypothesis potentially testable in a future quantitative approach.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if they meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
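
As a small illustration of how the PICOT elements can be turned into a draft research question, the sketch below fills a generic question template from labeled components. The template wording and the example values (drawn loosely from the moxibustion example discussed later in Table 6) are this illustration's own assumptions, not a prescribed formula from the cited frameworks.

```python
# Minimal sketch: assembling a draft PICOT-style research question from its elements.
# The template and the example values are illustrative assumptions, not a fixed standard.
picot = {
    "P": "pregnant women with breech presentation",      # population/patients/problem
    "I": "smoke moxibustion",                             # intervention or indicator
    "C": "standard care without moxibustion",             # comparison group
    "O": "rate of cephalic presentation at birth",        # outcome of interest
    "T": "between 34 weeks of gestation and delivery",    # timeframe of the study
}

question = (
    f"In {picot['P']}, does {picot['I']}, compared with {picot['C']}, "
    f"improve the {picot['O']} {picot['T']}?"
)
print(question)
```

A PEO question can be drafted the same way by swapping the intervention and comparison elements for an exposure element.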

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6) 16 and qualitative research (Table 7), 17 and show how to transform these ambiguous research questions and hypotheses into clear and good statements.

Table 6. Examples of ambiguous and clear research questions, hypotheses, and objectives in quantitative research

Research question
- Unclear and weak statement (Statement 1): Which is more effective between smoke moxibustion and smokeless moxibustion?
- Clear and good statement (Statement 2): “Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated.”
- Points to avoid: 1) vague and unfocused questions; 2) closed questions simply answerable by yes or no; 3) questions requiring a simple choice

Hypothesis
- Unclear and weak statement (Statement 1): The smoke moxibustion group will have higher cephalic presentation.
- Clear and good statement (Statement 2): “Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death.”
- Points to avoid: 1) unverifiable hypotheses; 2) incompletely stated groups of comparison; 3) insufficiently described variables or outcomes

Research objective
- Unclear and weak statement (Statement 1): To determine which is more effective between smoke moxibustion and smokeless moxibustion.
- Clear and good statement (Statement 2): “The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child.”
- Points to avoid: 1) poor understanding of the research question and hypotheses; 2) insufficient description of population, variables, or study outcomes

Note: The Statement 1 examples were composed for comparison and illustrative purposes only. The Statement 2 examples are direct quotes from Higashihara and Horiuchi. 16

Table 7. Examples of ambiguous and clear research questions, hypotheses, and objectives in qualitative research

Research question
- Unclear and weak statement (Statement 1): Does disrespect and abuse (D&A) occur in childbirth in Tanzania?
- Clear and good statement (Statement 2): How does disrespect and abuse (D&A) occur, and what are the types of physical and psychological abuses observed in midwives’ actual care during facility-based childbirth in urban Tanzania?
- Points to avoid: 1) ambiguous or oversimplistic questions; 2) questions unverifiable by data collection and analysis

Hypothesis
- Unclear and weak statement (Statement 1): Disrespect and abuse (D&A) occur in childbirth in Tanzania.
- Clear and good statement (Statement 2): Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania.
- Points to avoid: 1) statements simply expressing facts; 2) insufficiently described concepts or variables

Research objective
- Unclear and weak statement (Statement 1): To describe disrespect and abuse (D&A) in childbirth in Tanzania.
- Clear and good statement (Statement 2): “This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania.”
- Points to avoid: 1) statements unrelated to the research question and hypotheses; 2) unattainable or unexplorable objectives

Note: The clear and good research objective (Statement 2) is a direct quote from Shimoda et al. 17 The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

[Fig. 1. General flow for constructing effective research questions and hypotheses prior to conducting research (image file: jkms-37-e121-g001.jpg)]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, by contrast, research questions are used more frequently in survey projects, while hypotheses are used more frequently in experiments that compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves deducing a testable proposition from theory and separating the independent and dependent variables so that each can be measured. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
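
Following the if-then template above, a directional prediction ("if the intervention is given, then the outcome will be higher than in the control group") can be checked with a one-sided test once the independent and dependent variables have been defined. The sketch below is a hedged illustration on simulated data; the variable names and values are invented, and the `alternative` argument assumes SciPy 1.6 or later.

```python
# Minimal sketch: a directional (one-sided) test of an if-then hypothesis.
# "If participants receive the intervention, then their outcome scores will be
#  higher than those of the control group."  Data are simulated for illustration.
import numpy as np
from scipy import stats  # `alternative=` requires SciPy >= 1.6

rng = np.random.default_rng(0)
control = rng.normal(loc=50, scale=10, size=60)   # dependent variable, no intervention
treated = rng.normal(loc=55, scale=10, size=60)   # dependent variable, with intervention

# One-sided alternative: mean(treated) > mean(control)
t_stat, p_value = stats.ttest_ind(treated, control, alternative="greater")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```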

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2. Algorithm for building research questions and hypotheses in quantitative research (image file: jkms-37-e121-g002.jpg)]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student’s t-test, and logistic regression analysis); a brief illustrative sketch of this kind of analysis follows after this list.
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Student’s t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
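
The statistical analysis quoted in Example 4 combines a chi-squared test for categorical variables with t-tests and a logistic regression. The sketch below reproduces only the chi-squared piece on an invented 2 x 2 table (gender by full-time employment); the counts are made up for illustration and do not come from the cited study, and the regression step is only noted in a comment.

```python
# Minimal sketch: chi-squared test of independence for a categorical comparison,
# in the spirit of the analysis quoted in Example 4. The counts are invented.
from scipy.stats import chi2_contingency

# Rows: women, men; columns: full-time employed, not full-time employed
observed = [
    [30, 70],   # women (hypothetical counts)
    [55, 45],   # men (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# A logistic regression on data of this kind (e.g., with the statsmodels package)
# would estimate the independent effect of gender while adjusting for other variables.
```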

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.


4.3 Quantitative research questions

Learning Objectives

  • Describe how research questions for exploratory, descriptive, and explanatory quantitative questions differ and how to phrase them
  • Identify the differences between and provide examples of strong and weak explanatory research questions

Quantitative descriptive questions

The type of research you are conducting will impact the research question that you ask. Probably the easiest questions to think of are quantitative descriptive questions. For example, “What is the average student debt load of MSW students?” is a descriptive question—and an important one. We aren’t trying to build a causal relationship here. We’re simply trying to describe how much debt MSW students carry. Quantitative descriptive questions like this one are helpful in social work practice as part of community scans, in which human service agencies survey the various needs of the community they serve. If the scan reveals that the community requires more services related to housing, child care, or day treatment for people with disabilities, a nonprofit office can use the community scan to create new programs that meet a defined community need.

[Image: an illuminated street sign that reads “ask”]

Quantitative descriptive questions will often ask for percentage, count the number of instances of a phenomenon, or determine an average. Descriptive questions may only include one variable, such as ours about debt load, or they may include multiple variables. Because these are descriptive questions, we cannot investigate causal relationships between variables. To do that, we need to use a quantitative explanatory question.
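
Because descriptive questions reduce to counts, percentages, and averages, they map directly onto simple summary statistics. The sketch below computes the average debt load and the percentage of respondents above an arbitrary threshold from a small invented survey; the numbers are placeholders, not real MSW data.

```python
# Minimal sketch: answering a quantitative descriptive question with summary statistics.
# The survey responses below are invented placeholders, not real student data.
debt_loads = [12000, 35000, 0, 48000, 27000, 15000, 60000, 22000]  # dollars per respondent

average_debt = sum(debt_loads) / len(debt_loads)
share_over_30k = 100 * sum(1 for d in debt_loads if d > 30000) / len(debt_loads)

print(f"Average debt load: ${average_debt:,.0f}")
print(f"Respondents with more than $30,000 of debt: {share_over_30k:.0f}%")
```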

Quantitative explanatory questions

Most studies you read in the academic literature will be quantitative and explanatory. Why is that? Explanatory research tries to build something called nomothetic causal explanations. Matthew DeCarlo says “com[ing] up with a broad, sweeping explanation that is universally true for all people” is the hallmark of nomothetic causal relationships (DeCarlo, 2018, chapter 7.2, para. 5). They are generalizable across space and time, so they are applicable to a wide audience. The editorial board of a journal wants to make sure their content will be useful to as many people as possible, so it’s not surprising that quantitative research dominates the academic literature.

Structurally, quantitative explanatory questions must contain an independent variable and dependent variable. Questions should ask about the relation between these variables. A standard format for an explanatory quantitative research question is: “What is the relation between [independent variable] and [dependent variable] for [target population]?” You should play with the wording for your research question, revising it as you see fit. The goal is to make the research question reflect what you really want to know in your study.
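
Once data are collected, "the relation between [independent variable] and [dependent variable]" usually becomes an estimate such as a correlation or regression slope. The sketch below fits a simple linear regression on simulated values; the variable names and numbers are illustrative assumptions only, not findings from any study.

```python
# Minimal sketch: estimating the relation between an independent and a dependent
# variable for an explanatory question. All values are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hours_of_support = rng.uniform(0, 20, size=100)   # independent variable (hypothetical)
wellbeing_score = 40 + 1.2 * hours_of_support + rng.normal(0, 5, size=100)  # dependent variable

result = stats.linregress(hours_of_support, wellbeing_score)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```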

Let’s take a look at a few more examples of possible research questions and consider the relative strengths and weaknesses of each. Table 4.1 does just that. While reading the table, keep in mind that it only includes some of the most relevant strengths and weaknesses of each question. Certainly each question may have additional strengths and weaknesses not noted in the table.

Table 4.1 Sample research questions: Strengths and weaknesses
Sample question: What are the internal and external effects/problems associated with children witnessing domestic violence?
- Strengths: Written as a question; considers relation among multiple concepts; contains a population
- Weaknesses: Not clearly focused; not specific and clear about the concepts it addresses
- Proposed alternative: How does witnessing domestic violence impact a child’s romantic relationships in adulthood?

Sample question: What causes foster children who are transitioning to adulthood to become homeless, jobless, pregnant, unhealthy, etc.?
- Strengths: Considers relation among multiple concepts; contains a population; not written as a yes/no question
- Weaknesses: Concepts are not specific and clear
- Proposed alternative: What is the relationship between sexual orientation or gender identity and homelessness for late adolescents in foster care?

Sample question: How does income inequality predict ambivalence in the Stereo Content Model using major U.S. cities as target populations?
- Strengths: Written as a question; considers relation among multiple concepts
- Weaknesses: Unclear wording; population is unclear
- Proposed alternative: How does income inequality affect ambivalence in high-density urban areas?

Sample question: Why are mental health rates higher in white foster children then African Americans and other races?
- Strengths: Written as a question; not written as a yes/no question
- Weaknesses: Concepts are not clear; does not contain a target population
- Proposed alternative: How does race impact rates of mental health diagnosis for children in foster care?

Making it more specific

A good research question should also be specific and clear about the concepts it addresses. A group of students investigating gender and household tasks knows what they mean by “household tasks.” You likely also have an impression of what “household tasks” means. But are your definition and the students’ definition the same? A participant in their study may think that managing finances and performing home maintenance are household tasks, but the researcher may be interested in other tasks like childcare or cleaning. The only way to ensure your study stays focused and clear is to be specific about what you mean by a concept. The student in our example could pick a specific household task that was interesting to them or that the literature indicated was important—for example, childcare. Or, the student could have a broader view of household tasks, one that encompasses childcare, food preparation, financial management, home repair, and care for relatives. Any option is probably okay, as long as the researchers are clear on what they mean by “household tasks.”

Table 4.2 contains some “watch words” that indicate you may need to be more specific about the concepts in your research question.

Table 4.2 Explanatory research question “watch words”
  • Factors, Causes, Effects, Outcomes: What causes or effects are you interested in? What causes and effects are important, based on the literature in your topic area? Try to choose one or a handful that you consider to be the most important.
  • Effective, Effectiveness, Useful, Efficient: Effective at doing what? Effectiveness is meaningless on its own. What outcome should the program or intervention have? Reduced symptoms of a mental health issue? Better socialization?
  • Etc., and so forth: Get more specific. You need to know enough about your topic to clearly address the concepts within it. Don’t assume that your reader understands what you mean by “and so forth.”

It can be challenging in social work research to be this specific, particularly when you are just starting out your investigation of the topic. If you’ve only read one or two articles on the topic, it can be hard to know what you are interested in studying. Broad questions like “What are the causes of chronic homelessness, and what can be done to prevent it?” are common at the beginning stages of a research project. However, social work research demands that you examine the literature on the topic and refine your question over time to be more specific and clear before you begin your study. Perhaps you want to study the effect of a specific anti-homelessness program that you found in the literature. Maybe there is a particular model for fighting homelessness, like Housing First or transitional housing, that you want to investigate further. You may want to focus on a potential cause of homelessness such as LGBTQ discrimination that you find interesting or relevant to your practice. As you can see, the possibilities for making your question more specific are almost infinite.

Quantitative exploratory questions

In exploratory research, the researcher doesn’t quite know the lay of the land yet. If someone is proposing to conduct an exploratory quantitative project, the watch words highlighted in Table 4.2 are not problematic at all. In fact, questions such as “What factors influence the removal of children in child welfare cases?” are good because they will explore a variety of factors or causes. In this question, the independent variable is less clearly written, but the dependent variable, the removal of children in child welfare cases, is quite clearly written. The inverse can also be true. If we were to ask, “What outcomes are associated with family preservation services in child welfare?”, we would have a clear independent variable, family preservation services, but an unclear dependent variable, outcomes. Because we are only conducting exploratory research on a topic, we may not have an idea of what concepts may comprise our “outcomes” or “factors.” Only after interacting with our participants will we be able to understand which concepts are important.
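
When the question is exploratory and the "factors" are not yet pinned down, a common first pass is to screen many candidate variables against the outcome and see which relationships deserve a more focused follow-up question. The sketch below does this with a simple correlation scan on simulated data; the column names are invented for illustration and imply nothing about real child-welfare predictors.

```python
# Minimal sketch: an exploratory scan of candidate factors against an outcome.
# The data and column names are simulated placeholders for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 200
df = pd.DataFrame({
    "caregiver_support": rng.normal(size=n),
    "housing_stability": rng.normal(size=n),
    "prior_reports": rng.poisson(1.5, size=n),
    "outcome_score": rng.normal(size=n),   # stand-in for the outcome of interest
})

# Correlation of every candidate factor with the outcome variable
correlations = df.corr()["outcome_score"].drop("outcome_score")
print(correlations)
```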

Key Takeaways

  • Quantitative descriptive questions are helpful for community scans but cannot investigate causal relationships between variables.
  • Quantitative explanatory questions must include an independent and dependent variable.

Image attributions

Ask by terimakasih0 cc-0.

Guidebook for Social Work Literature Reviews and Research Questions Copyright © 2020 by Rebecca Mauldin and Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Quantitative Research Methods for Social Work: Making Social Work Count

School for Policy Studies, University of Bristol

Research output: Book/Report › Authored book

Authors: Barbra Teater, John Devaney, Donald Forrester, Jonathan Scourfield, John Carpenter
Original language: English
Place of publication: London
Publisher: Palgrave Macmillan
Number of pages: 278
ISBN (Print): 978-1-137-40026-0
Publication status: Published - 1 Jan 2017
Keywords: Quantitative Research Methods; Social Work


Abstract: Social work knowledge and understanding draws heavily on research, and the ability to critically analyse research findings is a core skill for social workers. However, while many social work students are confident in reading qualitative data, a lack of understanding of basic statistical concepts means that this same confidence does not always apply to quantitative data. The book arose from a curriculum development project funded by the Economic and Social Research Council (ESRC), in conjunction with the Higher Education Funding Council for England, the British Academy and the Nuffield Foundation. This was part of a wider initiative to increase the numbers of quantitative social scientists in the UK in order to address an identified skills gap. This gap related to both the conduct of quantitative research and the literacy of social scientists in being able to read and interpret statistical information. The book is a comprehensive resource for students and educators. It is packed with activities and examples from social work covering the basic concepts of quantitative research methods – including reliability, validity, probability, variables and hypothesis testing – and explores key areas of data collection, analysis and evaluation, providing a detailed examination of their application to social work practice.

Qualitative vs Quantitative Research Methods & Data Analysis

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative data is information about quantities, and therefore numbers, and qualitative data is descriptive, and regards phenomena that can be observed but not measured, such as language.
  • Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed numerically. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.
  • Qualitative research gathers non-numerical data (words, images, sounds) to explore subjective experiences and attitudes, often via observation and interviews. It aims to produce detailed descriptions and uncover new insights about the studied phenomenon.


What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Denzin and Lincoln (1994, p. 2)

Interest in qualitative data came about as a result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the strictly scientific approach of psychologists such as the behaviorists (e.g., Skinner).

Because psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research, since it fails to capture the totality of human experience and the essence of being human. Exploring participants’ experiences is known as a phenomenological approach (re: Humanism).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Examples of qualitative research questions include: What does an experience feel like? How do people talk about something? How do they make sense of an experience? How do events unfold for people?

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews, documents, focus groups, case study research, and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience. Denzin and Lincoln (1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts : Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations : The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews : These generate qualitative data through the use of open questions, allowing respondents to talk in some depth and choose their own words. This helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals : Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis .

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded .
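
To illustrate one small, mechanical part of this process, the sketch below tallies researcher-assigned codes across a handful of invented interview excerpts using only Python's standard library. The excerpts and code labels are hypothetical, and real thematic analysis involves far more interpretive work than counting.

```python
from collections import Counter

# Hypothetical interview excerpts that a researcher has already coded by hand.
# Each entry pairs a transcript snippet with the codes assigned to it.
coded_excerpts = [
    ("I never felt listened to at the clinic", ["communication", "trust"]),
    ("The nurses explained everything clearly", ["communication"]),
    ("I worried about the cost of every visit", ["financial strain"]),
    ("My family helped me keep my appointments", ["social support"]),
]

# Count how often each code appears; codes that cluster together are
# candidates for grouping into broader themes during analysis.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)

for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```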

Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of an area is necessary to interpret qualitative data. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by more positivistic, scientific inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables , make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomena across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative data.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).

Experimental methods limit the ways in which research participants can react to and express natural social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health. Here are a few examples:

One example is the Experience in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles.

The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data : Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function.

This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

Another example is the Beck Depression Inventory (BDI), a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals.

The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms. 
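
Because each of the 21 items is scored from 0 to 3, a respondent's total score (0 to 63) can be computed by simple summation. The snippet below is a minimal sketch with made-up item responses; it is not the official scoring procedure and applies no clinical cut-offs.

```python
# Hypothetical responses to the 21 BDI items, each scored 0-3.
item_scores = [1, 0, 2, 1, 0, 3, 1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 2, 0, 1, 1, 2]

assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)

total = sum(item_scores)  # possible range: 0 (minimal) to 63 (most severe)
print(f"BDI total score: {total} out of 63")
```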

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized control study).
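
To make the distinction concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available, and using simulated scores rather than real study data). It first summarizes each group descriptively, then applies an inferential test, an independent-samples t-test, to compare a hypothetical intervention group with a control group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcome scores for a hypothetical randomized controlled study.
intervention = rng.normal(loc=72, scale=8, size=40)
control = rng.normal(loc=68, scale=8, size=40)

# Descriptive statistics: summarize each group.
for name, group in [("Intervention", intervention), ("Control", control)]:
    print(f"{name}: mean={group.mean():.1f}, sd={group.std(ddof=1):.1f}, n={len(group)}")

# Inferential statistics: is the difference between the group means
# statistically significant?
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```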

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning of the questions they may have for those participants (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on testing theories or hypotheses rather than on generating them.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS. Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The good research guide: For small-scale social research. McGraw Hill.

Denzin, N., & Lincoln, Y. (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.

Minichiello, V. (1990). In-depth interviewing: Researching people. Longman Cheshire.

Punch, K. (1998). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

Further Information

  • Mixed methods research
  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


Quantitative Research – Methods, Types and Analysis

What is Quantitative Research

Quantitative research is a type of research that collects and analyzes numerical data to test hypotheses and answer research questions . This research typically involves a large sample size and uses statistical analysis to make inferences about a population based on the data collected. It often involves the use of surveys, experiments, or other structured data collection methods to gather quantitative data.

Quantitative Research Methods

The main quantitative research methods are as follows:

Descriptive Research Design

Descriptive research design is used to describe the characteristics of a population or phenomenon being studied. This research method is used to answer the questions of what, where, when, and how. Descriptive research designs use a variety of methods such as observation, case studies, and surveys to collect data. The data is then analyzed using statistical tools to identify patterns and relationships.

Correlational Research Design

Correlational research design is used to investigate the relationship between two or more variables. Researchers use correlational research to determine whether a relationship exists between variables and to what extent they are related. This research method involves collecting data from a sample and analyzing it using statistical tools such as correlation coefficients.
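
For example, the strength and direction of the relationship between two measured variables can be summarized with a Pearson correlation coefficient. The sketch below uses SciPy on fabricated paired measurements purely for illustration.

```python
from scipy import stats

# Fabricated paired measurements: weekly study hours and exam scores.
study_hours = [2, 4, 5, 7, 8, 10, 12, 14]
exam_scores = [55, 58, 62, 65, 70, 74, 78, 85]

r, p_value = stats.pearsonr(study_hours, exam_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")

# r near +1 indicates a strong positive association; correlation alone does
# not establish that studying longer causes higher scores.
```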

Quasi-experimental Research Design

Quasi-experimental research design is used to investigate cause-and-effect relationships between variables. This research method is similar to experimental research design, but it lacks full control over the independent variable. Researchers use quasi-experimental research designs when it is not feasible or ethical to manipulate the independent variable.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This research method involves manipulating the independent variable and observing the effects on the dependent variable. Researchers use experimental research designs to test hypotheses and establish cause-and-effect relationships.

Survey Research

Survey research involves collecting data from a sample of individuals using a standardized questionnaire. This research method is used to gather information on attitudes, beliefs, and behaviors of individuals. Researchers use survey research to collect data quickly and efficiently from a large sample size. Survey research can be conducted through various methods such as online, phone, mail, or in-person interviews.

Quantitative Research Analysis Methods

Here are some commonly used quantitative research analysis methods:

Statistical Analysis

Statistical analysis is the most common quantitative research analysis method. It involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis can be used to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.

Regression Analysis

Regression analysis is a statistical technique used to analyze the relationship between one dependent variable and one or more independent variables. Researchers use regression analysis to identify and quantify the impact of independent variables on the dependent variable.
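
A minimal sketch of this idea, assuming the statsmodels library and a fabricated data set: an ordinary least squares model that estimates how two hypothetical independent variables (salary and commute time) relate to a dependent variable (job satisfaction).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Fabricated data: job satisfaction predicted from salary and commute time.
df = pd.DataFrame({
    "salary": rng.normal(50, 10, n),    # in thousands
    "commute": rng.normal(30, 10, n),   # in minutes
})
df["satisfaction"] = 2 + 0.05 * df["salary"] - 0.03 * df["commute"] + rng.normal(0, 1, n)

# Fit an ordinary least squares regression with two predictors.
model = smf.ols("satisfaction ~ salary + commute", data=df).fit()
print(model.params)    # estimated effect of each independent variable
print(model.rsquared)  # proportion of variance explained
```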

Factor Analysis

Factor analysis is a statistical technique used to identify underlying factors that explain the correlations among a set of variables. Researchers use factor analysis to reduce a large number of variables to a smaller set of factors that capture the most important information.
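
As an illustrative sketch, the snippet below uses scikit-learn's FactorAnalysis on simulated questionnaire items (six observed variables generated from two latent factors); the data and loadings are invented for demonstration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300

# Simulate six questionnaire items driven by two underlying latent factors.
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                     [0.1, 0.9], [0.2, 0.8], [0.0, 0.7]])
items = latent @ loadings.T + rng.normal(scale=0.3, size=(n, 6))

# Estimate two factors and inspect how strongly each item loads on them.
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(fa.components_.round(2))
```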

Structural Equation Modeling

Structural equation modeling is a statistical technique used to test complex relationships between variables. It involves specifying a model that includes both observed and unobserved variables, and then using statistical methods to test the fit of the model to the data.

Time Series Analysis

Time series analysis is a statistical technique used to analyze data that is collected over time. It involves identifying patterns and trends in the data, as well as any seasonal or cyclical variations.
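
For instance, a monthly series can be decomposed into trend and seasonal components. The sketch below (assuming pandas and statsmodels, with a synthetic series) is meant only to illustrate the idea.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series with an upward trend and a yearly seasonal cycle.
index = pd.date_range("2015-01-01", periods=96, freq="MS")
trend = np.linspace(100, 160, 96)
season = 10 * np.sin(2 * np.pi * np.arange(96) / 12)
noise = np.random.default_rng(2).normal(0, 3, 96)
series = pd.Series(trend + season + noise, index=index)

result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())  # smoothed long-run trend
print(result.seasonal.head(12))      # repeating seasonal pattern
```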

Multilevel Modeling

Multilevel modeling is a statistical technique used to analyze data that is nested within multiple levels. For example, researchers might use multilevel modeling to analyze data that is collected from individuals who are nested within groups, such as students nested within schools.
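
A hedged sketch of the students-within-schools example, using statsmodels' mixed-effects module on simulated data (school labels, sample sizes, and effect sizes are all invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for i in range(20):                      # 20 hypothetical schools
    school_effect = rng.normal(0, 5)     # each school shifts scores up or down
    for _ in range(30):                  # 30 students per school
        hours = rng.uniform(0, 10)
        score = 60 + 2 * hours + school_effect + rng.normal(0, 8)
        rows.append({"school": f"school_{i}", "hours": hours, "score": score})

df = pd.DataFrame(rows)

# Random intercept for each school; fixed effect of study hours on scores.
model = smf.mixedlm("score ~ hours", data=df, groups=df["school"]).fit()
print(model.summary())
```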

Applications of Quantitative Research

Quantitative research has many applications across a wide range of fields. Here are some common examples:

  • Market Research : Quantitative research is used extensively in market research to understand consumer behavior, preferences, and trends. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform marketing strategies, product development, and pricing decisions.
  • Health Research: Quantitative research is used in health research to study the effectiveness of medical treatments, identify risk factors for diseases, and track health outcomes over time. Researchers use statistical methods to analyze data from clinical trials, surveys, and other sources to inform medical practice and policy.
  • Social Science Research: Quantitative research is used in social science research to study human behavior, attitudes, and social structures. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform social policies, educational programs, and community interventions.
  • Education Research: Quantitative research is used in education research to study the effectiveness of teaching methods, assess student learning outcomes, and identify factors that influence student success. Researchers use experimental and quasi-experimental designs, as well as surveys and other quantitative methods, to collect and analyze data.
  • Environmental Research: Quantitative research is used in environmental research to study the impact of human activities on the environment, assess the effectiveness of conservation strategies, and identify ways to reduce environmental risks. Researchers use statistical methods to analyze data from field studies, experiments, and other sources.

Characteristics of Quantitative Research

Here are some key characteristics of quantitative research:

  • Numerical data : Quantitative research involves collecting numerical data through standardized methods such as surveys, experiments, and observational studies. This data is analyzed using statistical methods to identify patterns and relationships.
  • Large sample size: Quantitative research often involves collecting data from a large sample of individuals or groups in order to increase the reliability and generalizability of the findings.
  • Objective approach: Quantitative research aims to be objective and impartial in its approach, focusing on the collection and analysis of data rather than personal beliefs, opinions, or experiences.
  • Control over variables: Quantitative research often involves manipulating variables to test hypotheses and establish cause-and-effect relationships. Researchers aim to control for extraneous variables that may impact the results.
  • Replicable : Quantitative research aims to be replicable, meaning that other researchers should be able to conduct similar studies and obtain similar results using the same methods.
  • Statistical analysis: Quantitative research involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis allows researchers to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.
  • Generalizability: Quantitative research aims to produce findings that can be generalized to larger populations beyond the specific sample studied. This is achieved through the use of random sampling methods and statistical inference (a minimal sketch follows this list).
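
The sketch below illustrates the random sampling and inference point using only Python's standard library: it draws a simple random sample from a hypothetical population and attaches a normal-approximation confidence interval to the sample estimate. The population and its 38% attribute rate are invented for demonstration.

```python
import math
import random

random.seed(7)

# Hypothetical sampling frame of 100,000 people, 38% of whom hold some attribute.
population = [1] * 38_000 + [0] * 62_000
random.shuffle(population)

# Draw a simple random sample and estimate the population proportion.
sample = random.sample(population, 1_000)
p_hat = sum(sample) / len(sample)

# Approximate 95% confidence interval for a proportion (normal approximation).
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"Sample estimate: {p_hat:.3f} +/- {margin:.3f}")
```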

Examples of Quantitative Research

Here are some examples of quantitative research in different fields:

  • Market Research: A company conducts a survey of 1000 consumers to determine their brand awareness and preferences. The data is analyzed using statistical methods to identify trends and patterns that can inform marketing strategies.
  • Health Research : A researcher conducts a randomized controlled trial to test the effectiveness of a new drug for treating a particular medical condition. The study involves collecting data from a large sample of patients and analyzing the results using statistical methods.
  • Social Science Research : A sociologist conducts a survey of 500 people to study attitudes toward immigration in a particular country. The data is analyzed using statistical methods to identify factors that influence these attitudes.
  • Education Research: A researcher conducts an experiment to compare the effectiveness of two different teaching methods for improving student learning outcomes. The study involves randomly assigning students to different groups and collecting data on their performance on standardized tests.
  • Environmental Research : A team of researchers conducts a study to investigate the impact of climate change on the distribution and abundance of a particular species of plant or animal. The study involves collecting data on environmental factors and population sizes over time and analyzing the results using statistical methods.
  • Psychology : A researcher conducts a survey of 500 college students to investigate the relationship between social media use and mental health. The data is analyzed using statistical methods to identify correlations and potential causal relationships.
  • Political Science: A team of researchers conducts a study to investigate voter behavior during an election. They use survey methods to collect data on voting patterns, demographics, and political attitudes, and analyze the results using statistical methods.

How to Conduct Quantitative Research

Here is a general overview of how to conduct quantitative research:

  • Develop a research question: The first step in conducting quantitative research is to develop a clear and specific research question. This question should be based on a gap in existing knowledge, and should be answerable using quantitative methods.
  • Develop a research design: Once you have a research question, you will need to develop a research design. This involves deciding on the appropriate methods to collect data, such as surveys, experiments, or observational studies. You will also need to determine the appropriate sample size, data collection instruments, and data analysis techniques.
  • Collect data: The next step is to collect data. This may involve administering surveys or questionnaires, conducting experiments, or gathering data from existing sources. It is important to use standardized methods to ensure that the data is reliable and valid.
  • Analyze data: Once the data has been collected, it is time to analyze it. This involves using statistical methods to identify patterns, trends, and relationships between variables. Common statistical techniques include correlation analysis, regression analysis, and hypothesis testing (see the sketch after this list).
  • Interpret results: After analyzing the data, you will need to interpret the results. This involves identifying the key findings, determining their significance, and drawing conclusions based on the data.
  • Communicate findings: Finally, you will need to communicate your findings. This may involve writing a research report, presenting at a conference, or publishing in a peer-reviewed journal. It is important to clearly communicate the research question, methods, results, and conclusions to ensure that others can understand and replicate your research.
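
As a compact illustration of the analysis and interpretation steps above, the sketch below uses pandas and SciPy on a small fabricated data set standing in for collected survey responses; the group labels and scores are hypothetical.

```python
import pandas as pd
from scipy import stats

# Fabricated data standing in for responses collected in the data collection step.
df = pd.DataFrame({
    "group": ["treatment"] * 5 + ["control"] * 5,
    "score": [78, 82, 75, 88, 80, 70, 72, 68, 74, 71],
})

# Analyze: summarize each group, then test whether the means differ.
print(df.groupby("group")["score"].agg(["mean", "std", "count"]))

treatment = df.loc[df["group"] == "treatment", "score"]
control = df.loc[df["group"] == "control", "score"]
t_stat, p_value = stats.ttest_ind(treatment, control)

# Interpret: a small p-value suggests the observed difference is unlikely
# under the null hypothesis of equal group means.
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```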

When to use Quantitative Research

Here are some situations when quantitative research can be appropriate:

  • To test a hypothesis: Quantitative research is often used to test a hypothesis or a theory. It involves collecting numerical data and using statistical analysis to determine if the data supports or refutes the hypothesis.
  • To generalize findings: If you want to generalize the findings of your study to a larger population, quantitative research can be useful. This is because it allows you to collect numerical data from a representative sample of the population and use statistical analysis to make inferences about the population as a whole.
  • To measure relationships between variables: If you want to measure the relationship between two or more variables, such as the relationship between age and income, or between education level and job satisfaction, quantitative research can be useful. It allows you to collect numerical data on both variables and use statistical analysis to determine the strength and direction of the relationship.
  • To identify patterns or trends: Quantitative research can be useful for identifying patterns or trends in data. For example, you can use quantitative research to identify trends in consumer behavior or to identify patterns in stock market data.
  • To quantify attitudes or opinions : If you want to measure attitudes or opinions on a particular topic, quantitative research can be useful. It allows you to collect numerical data using surveys or questionnaires and analyze the data using statistical methods to determine the prevalence of certain attitudes or opinions.

Purpose of Quantitative Research

The purpose of quantitative research is to systematically investigate and measure the relationships between variables or phenomena using numerical data and statistical analysis. The main objectives of quantitative research include:

  • Description : To provide a detailed and accurate description of a particular phenomenon or population.
  • Explanation : To explain the reasons for the occurrence of a particular phenomenon, such as identifying the factors that influence a behavior or attitude.
  • Prediction : To predict future trends or behaviors based on past patterns and relationships between variables.
  • Control : To identify the best strategies for controlling or influencing a particular outcome or behavior.

Quantitative research is used in many different fields, including social sciences, business, engineering, and health sciences. It can be used to investigate a wide range of phenomena, from human behavior and attitudes to physical and biological processes. The purpose of quantitative research is to provide reliable and valid data that can be used to inform decision-making and improve understanding of the world around us.

Advantages of Quantitative Research

There are several advantages of quantitative research, including:

  • Objectivity : Quantitative research is based on objective data and statistical analysis, which reduces the potential for bias or subjectivity in the research process.
  • Reproducibility : Because quantitative research involves standardized methods and measurements, it is more likely to be reproducible and reliable.
  • Generalizability : Quantitative research allows for generalizations to be made about a population based on a representative sample, which can inform decision-making and policy development.
  • Precision : Quantitative research allows for precise measurement and analysis of data, which can provide a more accurate understanding of phenomena and relationships between variables.
  • Efficiency : Quantitative research can be conducted relatively quickly and efficiently, especially when compared to qualitative research, which may involve lengthy data collection and analysis.
  • Large sample sizes : Quantitative research can accommodate large sample sizes, which can increase the representativeness and generalizability of the results.

Limitations of Quantitative Research

There are several limitations of quantitative research, including:

  • Limited understanding of context: Quantitative research typically focuses on numerical data and statistical analysis, which may not provide a comprehensive understanding of the context or underlying factors that influence a phenomenon.
  • Simplification of complex phenomena: Quantitative research often involves simplifying complex phenomena into measurable variables, which may not capture the full complexity of the phenomenon being studied.
  • Potential for researcher bias: Although quantitative research aims to be objective, there is still the potential for researcher bias in areas such as sampling, data collection, and data analysis.
  • Limited ability to explore new ideas: Quantitative research is often based on pre-determined research questions and hypotheses, which may limit the ability to explore new ideas or unexpected findings.
  • Limited ability to capture subjective experiences : Quantitative research is typically focused on objective data and may not capture the subjective experiences of individuals or groups being studied.
  • Ethical concerns : Quantitative research may raise ethical concerns, such as invasion of privacy or the potential for harm to participants.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Open access | Published: 02 September 2024

Weaving equity into infrastructure resilience research: a decadal review and future directions

Natalie Coleman, Xiangpeng Li, Tina Comes & Ali Mostafavi

npj Natural Hazards, volume 1, Article number: 25 (2024)


Subjects: Natural hazards, Sustainability

Infrastructure resilience plays an important role in mitigating the negative impacts of natural hazards by ensuring the continued accessibility and availability of resources. Increasingly, equity is recognized as essential for infrastructure resilience. Yet, after about a decade of research on equity in infrastructure resilience, what is missing is a systematic overview of the state of the art and a research agenda across different infrastructures and hazards. To address this gap, this paper presents a systematic review of equity literature on infrastructure resilience in relation to natural hazard events. In our systematic review of 99 studies, we followed an 8-dimensional assessment framework that recognizes 4 equity definitions including distributional-demographic, distributional-spatial, procedural, and capacity equity. Significant findings show that (1) the majority of studies found were located in the US, (2) interest in equity in infrastructure resilience has been exponentially rising, (3) most data collection methods used descriptive and open-data, particularly with none of the non-US studies using human mobility data, (4) limited quantitative studies used non-linear analysis such as agent-based modeling and gravity networks, (5) distributional equity is mostly studied through disruptions in power, water, and transportation caused by flooding and tropical cyclones, and (6) other equity aspects, such as procedural equity, remain understudied. We propose that future research directions could quantify the social costs of infrastructure resilience and advocate a better integration of equity into resilience decision-making. This study fills a critical gap in how equity considerations can be integrated into infrastructure resilience against natural hazards, providing a comprehensive overview of the field and developing future research directions to enhance societal outcomes during and after disasters. As such, this paper is meant to inform and inspire researchers, engineers, and community leaders to understand the equity implications of their work and to embed equity at the heart of infrastructure resilience plans.


Introduction

Infrastructures are the backbones of our societies, connecting people to essential resources and services. At the same time, infrastructure systems such as power, water, and transportation play a pivotal role in determining whether a natural hazard event escalates into a disaster 1 . Driven by the combination of accelerating climate hazards and increasing vulnerability, natural hazards caused infrastructure and building losses of between $732 billion and $845 billion internationally, according to a 2022 Reuters report 2 . In another report by the World Bank (2019), the direct damage to power and transportation systems had an estimated cost of $18 billion annually 3 . Not only do infrastructure disruptions result in economic losses but they also lead to health issues and a decline in quality of life 4 . Since infrastructure systems secure the accessibility and availability of water, health, and electricity, among other critical services, disruptions of infrastructure exacerbate disasters. For example, the Nepal earthquake (2015) caused the collapse of 262 micro-hydropower plants and 104 hospitals, which further weakened the community’s ability to recover from the hazardous event 5 . Hurricane Maria (2017) in Puerto Rico led to year-long power disruptions which contributed to an estimated 2975 human fatalities 6 . Therefore, infrastructure resilience is becoming increasingly prominent in research, policy, and practice.

The National Infrastructure Advisory Council defined infrastructure resilience as the ability of infrastructure systems to absorb, adapt, or recover from disruptive events such as natural hazards 7 , 8 . From an engineering viewpoint, infrastructure resilience ensures no significant degradation or loss of system performance in case of a shock (robustness), establishes multiple access channels to infrastructure services (redundancy), effectively mobilizes resources and adapts to new conditions (resourcefulness), and accomplishes these goals in a timely manner (rapidity) 9 . From these origins, infrastructure resilience has evolved to include the complex interactions of technology, policy, social, and governance structures 10 . The United Nations Office for Disaster Risk Reduction discusses the need to use transdisciplinary and systemic methods to guide infrastructure resilience 11 . In their Principles of Resilient Infrastructure report, the principles of infrastructure resilience are to develop understanding and insights (continual learning), prepare for current and future hazards (proactively protected), positively work with the natural environment (environmentally integrated), develop participation across all levels of society (socially engaged), share information and expertise for coordinated benefits (shared responsibility), and address changing needs in infrastructure operations (adaptively transforming) 12 .

Based on the argument of Schlor et al. 13 that “social equity is essential for an urban resilience concept,” we also argue that equity in infrastructure resilience will not only benefit vulnerable populations but also lead to more resilient communities. Equity, in a broad sense, refers to the impartial distribution and just accessibility of resources, opportunities, and outcomes, which strive for fairness regardless of location and social group 14 , 15 . Equity in infrastructure resilience ensures that everyone in the community, regardless of their demographic background, geographic location, level of community status, and internal capabilities, has access to and benefits from infrastructure services. It would also address the limitations of infrastructure resilience, which brings short-term benefits to a specific group of people but ultimately results in long-term disaster impacts 16 . A failure to recognize equity in infrastructure resilience could exacerbate the disaster impact and lock in recovery processes, which in turn, reduces future resilience and leads to a vicious cycle 17 .

Even though infrastructure resilience has important equity impacts, the traditional definition of infrastructure resilience is antithetical to equity. Socially vulnerable populations (such as lower income, minority, indigenous, or rural populations) have traditionally been excluded from the development, maintenance, and planning of infrastructure resilience 18 . For instance, resilience strategies do not conventionally consider the unique needs and vulnerabilities of different communities, leading to inadequate one-size-fits-all solutions 19 . Conventional approaches to restoring infrastructure after hazard events are based on the number of outages, the number of affected customers, and extent of damage within an area, depending on the company preferences, and rarely prioritize the inherent vulnerability of affected individuals and areas 20 . Thereby, those who are most dependent on infrastructure systems may also be most affected by their outages. Several reports, such as National Institute of Standards and Technology 21 , United Nations Office for Project Services 11 , United Nations Office for Disaster Risk Reduction and Coalition for Disaster Resilient Infrastructure 22 , and the Natural Hazards Engineering Research Infrastructure 23 have recognized the importance of considering vulnerable populations in infrastructure resilience.

Furthermore, infrastructure resilience efforts often require significant investment at individual, community, and societal levels 24 . For instance, lower income households may not be able to afford power generators or water tanks to replace system losses 25 , 26 , which means they are more dependent on public infrastructure systems. Wealthier communities may receive more funding and resources for resilience projects due to better political representation and economic importance 27 . Improvements in infrastructure can also lead to gentrification and displacement, as an area perceived with increased safety may raise property values and push out underrepresented residents 28 . Infrastructure resilience may not be properly communicated or usable for all members of the community 29 . Research has also shown an association between vulnerable groups facing more intense losses and longer restoration periods of infrastructure disruptions due to planning biases, inadequate maintenance, and governance structures 18 . Due to the limited tools that translate equity considerations, infrastructure managers, owners, and operators are unlikely to recognize inequities in service provision 20 . Finally, resilience planning can prioritize rapid recovery which may not allow for sufficient time to address the underlying social inequities. This form of resilience planning overlooks the range of systematic disparities evident in infrastructure planning, management, operations, and maintenance in normal times and hazardous conditions 18 .

The field of equity in infrastructure resilience has sparked increasing interest over the last decade. First, researchers have distinguished equal and equitable treatment for infrastructure resilience. As stated by Kim and Sutley 30 , equality creates equivalence at the beginning of a process whereas equity seeks equivalence at the end. Second, the term has been interpreted through other social-economic concepts such as social justice 16 , sustainability 31 , vulnerability 32 , welfare 33 , 34 , and environmental justice 35 . Third, equitable infrastructure is frequently associated with pre-existing inequities such as demographic features 36 , 37 , spatial clusters 38 , 39 , 40 , and political processes 41 . Fourth, studies have proposed frameworks to analyze the relationship of equity in infrastructure resilience 42 , 43 , adapted quantitative and qualitative approaches 44 , 45 , and created decision-making tools for equity in infrastructure resilience 31 , 46 .

Despite a decade of increasing interest in integrating equity into infrastructure resilience, what is still missing is a systematic evaluation of collective research progress and fundamental knowledge. To address this gap, this paper presents a comprehensive systematic literature review of equity-related literature in the field of infrastructure resilience during natural hazards. The aim is to provide a thorough overview of the current state of the art by synthesizing the growing body of literature on equitable thinking and academic research in infrastructure resilience. From there, we aim to identify gaps and establish a research agenda. This review focuses on the intersection of natural hazard events, infrastructure resilience, and equity to answer three overarching research questions. As such, this research is important because it explores the critical but often neglected integration of equity into infrastructure resilience against natural hazards. It provides a comprehensive overview and identifies future research opportunities to improve societal outcomes during and after disasters.

What are the prevailing concepts, foci, methods, and theories in assessing the inequities of infrastructure services in association with natural hazard events?

What are the similarities and differences in studying pathways of equity in infrastructure resilience?

What are the current gaps of knowledge and future challenges of studying equity in infrastructure resilience?

To answer the research questions, the authors reviewed 99 studies and developed an 8-dimensional assessment framework to understand in which contexts and via which methods equity is studied. To differentiate between different equity conceptualizations, the review distinguishes four definitions of equity: distributional-demographic (D), distributional-spatial (S), procedural (P), and capacity (C). In our study, “pathways” explores the formation, examination, and application of equity within an 8-dimensional framework. Following Meerow’s framework of resilience to what and of what? 47 , we then analyze for which infrastructures and hazards equity is studied. Infrastructures include power, water, transportation, communication, health, food, sanitation, stormwater, emergency, and general if a specific infrastructure is not mentioned. Green infrastructure, social infrastructure, building structures, and industrial structures were excluded. The hazards studied include flood, tropical cyclone, drought, earthquake, extreme temperature, pandemic, and general if there is no specific hazard.

The in-depth decadal review aims to bring insights into what aspects are fully known, partially understood, or completely missing in the conversation involving equity, infrastructure resilience, and disasters. The review will advance the academic understanding of equity in infrastructure resilience by highlighting understudied areas, recognizing the newest methodologies, and advising future research directions. Building on fundamental knowledge can influence practical applications. Engineers and utility managers can use these findings to better understand potential gaps in the current approaches and practices that may lead to inequitable outcomes. Community leaders and advocates could also leverage such evidence-based insights for advocacy and bring attention to equity concerns in infrastructure resilience policies and guidelines.

Infrastructure resilience in the broader resilience debate

To establish links across the resilience fields, this section embeds infrastructure resilience into the broader resilience debate including general systems resilience, ecological resilience, social resilience, physical infrastructure resilience, and equity in infrastructure resilience. From the variety of literature in different disciplines, we focus on the definitions of resilience and draw out the applicability to infrastructure systems.

Resilience has initially been explored in ecological systems. Holling 48 defines resilience as the ability of ecosystems to absorb changes and maintain their core functionality. This perspective recognizes that ecosystems do not necessarily return to a single equilibrium state, but can exist in multiple steady states, each with distinct thresholds and tipping points. Building on these concepts, Carpenter et al. 49 assesses the capacity of socioecological systems to withstand disturbances without transitioning to alternative states. The research compares resilience properties in lake districts and rangelands such as the dependence on slow-changing variables, self-organization capabilities, and adaptive capacity. These concepts enrich our understanding of infrastructure resilience by acknowledging the complex interdependencies between natural and built systems. It also points out the different temporal rhythms across fast-paced behavioral and slow-paced ecological and infrastructural change 50 .

Social resilience brings the human and behavioral dimension to the foreground. Aldrich and Meyer focus on the concept of social capital in defining community resilience by emphasizing the role of social networks and relationships in enhancing a community’s ability to withstand and recover from disasters 51 . Aldrich and Meyer argue that social infrastructure is as important as physical infrastructure in disaster resilience. Particularly, the depth and quality of social networks can provide crucial support in times of crisis, facilitate information sharing, expedite resource allocation, and coordinate recovery efforts. Resilience, in this context, is defined as the enhancement and utilization of a community’s social infrastructure through social capital. It revolves around the collective capacity of communities to manage stressors and return to normalcy post-disaster through cooperative efforts.

Since community resilience relies on collaborative networks, which in turn are driven by accessibility, community and social resilience are intricately linked to functioning infrastructures 52 . To understand these relationships, we first examine systems-of-systems thinking. Vitae Systems of Systems aims to holistically resolve complex environmental and societal challenges 53 . It emphasizes strategic, adaptive, and interconnected solutions crucial for long-term system resilience. Individual systems, each with their capabilities and purposes, are connected in ways such that they can achieve together what they cannot achieve alone. Okada 54 also shows how the Vitae Systems of Systems can detect fundamental areas of concern and hotspots of vulnerability. It highlights principles of survivability (live through), vitality (live lively), and conviviality (live together) to build system capacity in the overall community. In the context of infrastructure resilience, these approaches bring context to the development of systems and their interdependencies, rather than focusing on the resilience of individual components in isolation.

Expanding on the notion of social and community resilience, Hay applies the key concepts of being adaptable and capable of maintaining critical functionalities during disruptions to infrastructure 55 . This perspective introduces the concept of “safe-to-fail” systems, which suggests that planning for resilience should anticipate and accommodate the potential for system failures in a way that minimizes overall disruption and aids quick recovery.

As such, the literature agrees that social, infrastructural, and environmental systems handle unexpected disturbances and continue to provide essential services. While Aldrich’s contribution lies in underscoring the importance of social ties and community networks, Hay expands this into the realm of physical systems by considering access to facilities. Infrastructure systems traditionally adapt and change slowly, driven by rigid physical structures, high construction costs, and planning regulations. In contrast, behavioral patterns are relatively fast-changing, even though close social connections and trust also take time to build. Yet, infrastructures form the backbone that enables—or disrupts—social ties. By adopting resilience principles that enable adaptation across infrastructure and social systems, better preparedness, response, and recovery can be achieved.

Given the dynamic, complex nature of resilience, infrastructure resilience, by extension, should not just be considered through the effective engineering of the built environment. Rather, infrastructure resilience must be considered as an integral part of the multi-layered resilience landscape. Crucial questions that link infrastructure to the broader resilience debate include: How will it be used and by whom? How are infrastructure resilience decisions taken, and whose voices are prioritized? These critical questions necessitate the integration of equity perspectives into the infrastructure resilience discourse.

Equity in infrastructure resilience ensures all community members have equitable access to essential services and infrastructure. In her commentary paper, Cutter 56 examines disaster resilience and vulnerability, challenging the prevalent ambiguity in the definitions of resilience. The paper poses two fundamental questions of “resilience to what?” and “resilience to whom?” . Later, Meerow and Newell 47 expanded on these questions in the context of urban resilience, “for whom, what, where, and why?” . They also stress the need for “resilience politics,” which include understanding of how power dynamics shape resilience policies, creating winners and losers 47 .

In a nutshell, resilience strategies must proactively address systemic inequities. This can also be framed around the concept of Rawls’ Theory of Justice principles, such as equal basic rights and fair equality of opportunity 57 , 58 . Rawls advocates for structuring social and economic inequalities to benefit the least advantaged members of society. In the context of infrastructure resilience, the theory would ensure vulnerable communities, such as lower-income households, have priority in infrastructure restoration. Incorporating Walker’s Theory of Abundant Access, this could also mean prioritizing those most dependent on public transit. Access to public transit, especially in lower-income brackets, allows for greater freedom of movement and connection to other essential facilities in the community like water, food, and health 59 , 60 . At the same time, Casali et al. 61 show that access to infrastructures alone is not sufficient for urban resilience to emerge. Such perspectives integrate physical and social elements of a community to equitably distribute infrastructure resilience benefits. Table 1 summarizes the selected definitions of resilience.

Definitions of equity

Equity in infrastructure resilience ensures that individuals have the same opportunity and access to infrastructure services regardless of differing demographics, spatial regions, involvement in the community, and internal capacity. Equity is a multifaceted concept that requires precise definitions to thoroughly assess and address it within the scope of infrastructure resilience. Based on the literature, our systematic literature review proposes four definitions of equity for infrastructure resilience: distributional-demographic (D), distributional-spatial (S), procedural (P), and capacity (C). Distributional-demographic (D) equity represents accessibility to and functionality of infrastructure services considering the vulnerability of demographic groups 62 . Distributional-spatial (S) equity focuses on the equitable distribution of infrastructure services to all spatial regions 63 . Procedural (P) equity refers to inclusive participation and transparent planning with stakeholders and community members 31 . Capacity (C) equity connects the supporting infrastructure to the hierarchy of needs, which recognizes the specific capacities of households 64 .

Distributional-demographic (D) addresses the systemic inequities in communities to ensure those of differing demographic status have equitable access to infrastructure services 37 . The purpose is to equitably distribute the burdens and benefits of services by reducing disparity for the most disadvantaged populations 42 . These groups may need greater support due to greater hardship to infrastructure losses, greater dependency on essential services, and disproportionate losses to infrastructure 43 , 65 , 66 . In addition, they may have differing abilities and need to mitigate service losses 33 . Our research bases distributional-demographic on age for young children and elderly, employment, education, ethnicity, people with disabilities, gender, income, tenure of residence, marginalized populations based on additional demographic characteristics, intergenerational, and general-social inequities 67 .

Distributional-spatial (S) recognizes that the operation and optimizations of the systems may leave certain areas in isolation 68 , 69 , 70 . For example, an equitable access to essential services (EAE) approach to spatial planning can identify these service deserts 46 . Urban and rural dynamics may also influence infrastructure inequities. Rural areas have deficient funding sources compared to urban areas 17 while urban areas may have greater vulnerability due to the interconnectedness of systems 71 . Our research labels distributional-spatial as spatial and urban-rural. Spatial involves spatial areas of extreme vulnerability through spatial regression models, spatial inequity hotspots, and specific mentions of vulnerable areas. Urban-rural references the struggles of urban-rural areas.

Procedural (P) equity ensures the inclusion of everyone in the decision-making process, from the collection of data to the influence of policies. According to Rivera 72 , inequities in the disaster recovery and reconstruction process originate from procedural vulnerabilities associated with historical and ongoing power relations. The validity of local cultural identities is often overlooked in the participation process of designing infrastructure 73 . Governments and institutions may have excluded certain groups from the conversation to understand, plan, manage, and diminish risk in infrastructure 74 . As argued by Liévanos and Horne 20 , such utilitarian bureaucratic decision rules can limit the recognition of unequal services and the development of corrective actions. These biases can be present in governmental policies, maintenance orders, building codes, and distribution of funding 30 . Our research labels procedural equity as stakeholder input and stakeholder engagement. Stakeholder input goes beyond collecting responses from interviews and surveys: researchers ask for specific feedback and validation on final research deliverables like models, results, and spatial maps, but participants are not included in the research planning process. Stakeholder engagement refers to instances where participants took an active role in the research deliverables to change elements of their community.

Capacity (C) equity is the ability of individuals, groups, and communities to counteract or mitigate the effect of infrastructure loss. As mentioned by Parsons et al. 75 , equity can be enhanced through a network of adaptive capacities at the household or community level. These adaptive capacities are viewed as an integral part of community resilience 76 . Regarding infrastructure, households can prepare for infrastructure losses and have service substitutes such as power generators or water storage tanks 77 , 78 . It may also include the household’s ability to tolerate disruptions and the ability to perceive risk to infrastructure losses 66 . However, capacity can be limited by people’s social connections, social standing, and access to financial resources and personal capital 79 . Our research categorizes capacity equity as adaptations, access, and susceptibility. Adaptations include preparedness strategies before a disaster as well as coping strategies during and after the disaster. Access includes a quantifiable metric of reaching critical resources, which may include but is not limited to vehicles, public transportation, or walking. Susceptibility involves a household’s internal capability such as tolerance, suffering, unhappiness, and willingness-to-pay models. Although social capital is an important aspect of capacity, it was not included since it is outside the scope of this research.

Methods of systematic literature review

Our systematic literature review used the Covidence software 80 , a production tool that makes the process of conducting systematic reviews more efficient and streamlined 80 . As a web-based platform, it supports the collaborative management of uploaded journal references and processes articles through a 4-step screening and analysis workflow: title and abstract screening, full-text screening, data abstraction, and quality assessment. The software also follows the guidelines of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), which provides a clear, transparent way for researchers to document their findings 81 . PRISMA includes a 27-item checklist and a 4-phase flow diagram of identification, screening, eligibility, and inclusion. Figure 1 summarizes the PRISMA method we followed during our review process by showing the search criteria and the articles retained at each stage, including identification, screening, eligibility, and inclusion.

figure 1

The figure shows the 4-step screening process of identification, screening, eligibility, and inclusion as well as the specific search criteria for each step. From the initial 2991 articles, 99 articles were selected.

Identification

The search covered Web of Science and Science Direct due to their comprehensive coverage and interdisciplinary sources. To cover a broad set of possible disasters and infrastructures, our search combined three key areas: equity (“equit- OR fair- OR justice- OR access-”), infrastructure (“AND infrastructure system- OR service-”), and disasters (“AND hazard- OR cris- OR disaster-”). We limited our search to journal articles published in engineering, social sciences, and interdisciplinary journals between January 2010 and March 2023. After excluding duplicates, the combined results of the two search engines yielded 2991 articles.
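
For reproducibility, the combined search string can also be expressed programmatically. The snippet below is a minimal sketch, assuming the trailing hyphens in the reported terms denote truncation wildcards (written here as asterisks); the variable names and helper function are ours, not part of the original search protocol.

```python
# Minimal sketch: assemble the boolean search string used for article identification.
# Term lists follow the reported search areas; wildcard notation is an assumption.
equity_terms = ["equit*", "fair*", "justice*", "access*"]
infrastructure_terms = ["infrastructure system*", "service*"]
hazard_terms = ["hazard*", "cris*", "disaster*"]

def or_group(terms):
    """Join a list of wildcard terms into a parenthesized OR clause."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_group(group) for group in
                     (equity_terms, infrastructure_terms, hazard_terms))
print(query)
# ("equit*" OR "fair*" OR "justice*" OR "access*") AND ("infrastructure system*" OR "service*") AND ("hazard*" OR "cris*" OR "disaster*")
```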

Screening

The articles were screened on their title and abstract, which had to explicitly mention both an infrastructure system (water, transportation, communication, etc.) and a natural hazard (tropical cyclone, earthquake, etc.). The specific criteria for infrastructure and natural hazard are found in the 8-dimension framework. This initial screening yielded 398 articles for full-text review.

Eligibility

The articles were examined based on the extent of their discussion of infrastructure, natural hazards, and the equity dimensions. Insufficient equity discussion means that the paper did not fall within the distributional-demographic, distributional-spatial, procedural, or capacity forms of equity (98 articles excluded). Studies were also excluded for not directly including equity analysis of the infrastructure system (19). Limited infrastructure focus means that the article focused on infrastructure outside the scope of this manuscript, such as industrial, green, building, or social infrastructure (74). Limited disaster focus means that the article did not connect to the direct or indirect impacts of disasters on infrastructure systems (45). Wrong study design covered literature reviews, opinion pieces, policy papers, and articles we were unable to access (56). This stage yielded 99 final articles.

Inclusion and assessment framework

To analyze the 99 articles, we designed an 8-dimensional assessment framework (see Fig. 2 ). In Fig. 2 , the visualization foregrounds equity, infrastructure, and natural hazards since these are the 3 main dimensions of the systematic literature review. The icons on the bottom are the remaining 5 dimensions, which add more analysis and context to the first 3 dimensions. Here, we refer to research question 1: what are the prevailing concepts, foci, methods, and theories in assessing the inequities of disrupted infrastructure services? The framework distinguishes the concepts (equity dimensions, infrastructure system, and natural hazard event), foci (geographical scale, geographic location, temporal scale), methods (nature of study and data collection), and theories (theoretical perspective) (Fig. 2 ). The following details each subquestion:

figure 2

Equity dimensions, infrastructure type, and hazard event type are the 3 main dimensions, while geographic location, geographic scale, temporal scale, nature of the study, and theoretical perspective are the remaining 5 dimensions, which add more information and context.

Equity dimensions

How is equity conceptualized and measured? First, we label equity according to the 4 definitions (DPSC). Second, we summarize each study’s equity conclusions.

Infrastructure type

Which infrastructure services were most and least commonly studied? This category is divided into power, water, transportation, communication, health, food, sanitation, stormwater, emergency, and general if a specific infrastructure is not mentioned. Studies can include more than one infrastructure service. Green infrastructure, social infrastructure, building structures, and industrial structures were excluded.

Hazard event type

Which hazard events are most or least frequently studied? This category includes flood, tropical cyclone, drought, earthquake, extreme temperature, pandemic, and general if there is no specific hazard. To clarify, tropical cyclones include hurricanes and typhoons, while extreme temperatures cover cold waves and heatwaves. This dimension distinguishes studies that are specific to particular hazards from those that can be applied to hazard events universally.

Geographic location

Which countries have studied equity the most and least? This category operates at the country scale, covering the United States, the Netherlands, China, and Australia, among others.

Geographic scale

What geographic unit of scale has been studied to represent equity? Smaller scales of study can reveal greater insights at the household level, while larger scales can reveal comparative differences between regional communities. The categories are individual, local, regional, country, and project. To clarify, ‘individual’ can include survey respondents, households, and stakeholder experts; ‘local’ covers census block groups, census tracts, ZIP codes, and equivalent scales; ‘regional’ covers counties, municipalities, cities, and equivalents; ‘project’ refers to studies that focused on specific infrastructure or construction projects.

Temporal scale

When did themes and priorities of equity first emerge? This category tracks when equity-in-infrastructure research was published and whether publication trends are increasing, decreasing, or constant.

Nature of the study

How are data for equity being collected and processed? First, this category analyzed the data types used, including conceptual, descriptive, open-data, location-intelligence, and simulation data. To clarify, conceptual refers to purely conceptual frameworks or hypothetical datasets; descriptive refers to surveys, questionnaires, interviews, or field observations performed by the researcher; open-data refers to any open-data source that is easily and freely attainable, such as census and flood data; location-intelligence refers to social media, human mobility, satellite and aerial images, visit data, and GIS layers; and finally, simulation data are developed through simulation models such as numerical software, Monte-Carlo, or percolation methods. Second, this category considers whether the data were processed through quantitative or qualitative methods. Quantitative methods may include correlation, principal component analysis, and spatial regression, while qualitative methods may include validation, thematic coding, participatory rural appraisal, and citizen science. We counted only analyses explicitly mentioned in the manuscript; for example, although it can be assumed that studies using linear regression also examined correlation and other descriptive statistics during data processing, such steps were not counted unless they were reported.

Theoretical perspective

Which theoretical frameworks have been created and used to evaluate equity? This category summarizes the reasoning behind the theoretical frameworks, which may have informal or formal names such as the service-gap model, well-being approach, and capability approach.

Based on the 8-dimensional assessment framework, the research first examines the spatiotemporal patterns as well as the data and methods used to evaluate equity. It then investigates the definitions of equity and their intersections with infrastructure and hazards. It concludes with a discussion of theoretical frameworks. We use the term “pathways” to identify how equity is constructed, analyzed, and used in relation to the 8-dimensional framework. For instance, the connection between equity and infrastructure is considered a pathway. By defining specific “pathways,” we map out the routes through which equity interacts with the various dimensions of the framework, such as infrastructure. The following analysis directly addresses research question 1 (prevailing concepts, foci, methods, and theories in assessing the inequities of disrupted infrastructure services) and research question 2 (similar and different pathways of equity). Supplementary Figures 1A – 12A provide additional context to the research findings and can be found in the Supplementary Information .
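
To illustrate what counting pathways looks like in practice, the following is a minimal sketch of how coded study records could be cross-tabulated into equity-infrastructure pathway counts with pandas. The table, column names, and values are hypothetical; the actual coded database is the one deposited in the DesignSafe Data Depot.

```python
import pandas as pd

# Hypothetical coding records: one row per equity-infrastructure pairing found in a study.
# Column names and values are illustrative only.
records = pd.DataFrame({
    "study_id":         [1, 1, 2, 3, 3, 4],
    "equity_dimension": ["D", "S", "D", "C", "D", "P"],
    "infrastructure":   ["power", "power", "water", "transportation", "power", "stormwater"],
})

# Each (equity dimension, infrastructure) pairing recorded for a study is one pathway;
# cross-tabulation gives the pathway counts reported in the tables.
pathways = pd.crosstab(records["equity_dimension"], records["infrastructure"])
print(pathways)
```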

Spatiotemporal patterns of equity

Overall, there is an increasing number of publications about equity in infrastructure management (Fig. 3 ). A slight decrease observed in 2021 could be due to the focus on COVID-19 research. Spatially, by far the most studies focus on the US (69), followed by India (3), Ghana (3), and Bangladesh (3) (Fig. 3 ). This surprising distribution seems to contradict the intuition that equity and fairness in infrastructure resilience are global phenomena. Besides the exact phrasing of the search term, this result can be explained by the focus of this review on the intersection of infrastructure resilience and inequity. For infrastructure resilience, prominent reports, such as the CDRI’s 2023 Global Infrastructure Resilience Report 82 , still fail to address inequity. Even though research has called for increasing consideration of equity and distributive justice in infrastructure and risk assessment, inequity is still all too often viewed as a social and economic risk 83 . At the same time, persistent imbalances in data availability have been shown to shift research interest to the US, especially for data-intensive studies on urban infrastructures 84 . Finally, efforts to mainstream equity and fairness across all infrastructures as a part of major transitions may explain why the equity discussion is less pronounced in the context of crises. For instance, in Europe, according to the EU climate act (Article 9(1)) 85 , all sectors need to be enabled and empowered to make the transition to a climate-resilient society fair and equitable.

figure 3

The bar graph shows an overall increase from 2011 to 2023 in publications about equity in infrastructure resilience during natural hazard events. The pie chart shows the countries studied: in the global north, the United States (US), England, Australia, Germany, Taiwan, Norway, South Korea, and Japan; in the global south, Bangladesh, India, Ghana, Mexico, Mozambique, Brazil, Tanzania, Sri Lanka, Pakistan, Nigeria, Kenya, Nepal, Zimbabwe, Central Asia, and South Africa.

Data and methods to interpret equity

Our Sankey diagram (Fig. 4 ) sketches the distribution of data collection pathways, connecting quantitative or qualitative data to data type to spatial scale. Most studies start from quantitative data (120), with fewer using mixed (34) or qualitative (18) data. Quantitative studies use descriptive (58), open-data (50), location-intelligence (36), simulation (19), and conceptual (9) data. The most prominent spatial scale was local (66), which consisted of census tracts, census block groups, ZIP codes, and equivalent spatial scales of analysis. This was followed by the individual or household scale (64), which largely stems from descriptive data from interviews, surveys, and field observations. Within the context of infrastructure, equity, and hazards, non-US studies did not use human mobility data, a specific type of location-intelligence data. This could be due to limitations in data availability and different data protection restrictions faced by these researchers, such as the European Union’s General Data Protection Regulation 86 . Increasingly, location-intelligence data were used to supplement the understanding of service disruptions. For example, satellite information 87 , telemetry-based data 37 , and human mobility data 88 were used to evaluate the equitable restoration of power systems and access to critical facilities. Social media data were used to quantify public emotional responses to disruptions 89 , 90 .

figure 4

The Sankey diagram shows the flow from studies containing quantitative, qualitative, or quantitative–qualitative data to the specific type of data of descriptive, open-data, location-intelligence, simulation, and conceptual to spatial scale of data of local, individual, regional, country, and project.
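
As an illustration of how flows like those in Fig. 4 can be rendered, the sketch below uses plotly’s Sankey trace. The node labels follow the figure caption, but the link indices and values are placeholders chosen for illustration, not the review’s exact tallies.

```python
import plotly.graph_objects as go

# Node order: data nature (0-2), data type (3-7), spatial scale (8-12); labels follow Fig. 4.
labels = ["quantitative", "qualitative", "mixed",
          "descriptive", "open-data", "location-intelligence", "simulation", "conceptual",
          "local", "individual", "regional", "country", "project"]

# Link values below are placeholders for illustration only.
fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=12, thickness=12),
    link=dict(
        source=[0, 0, 0, 2, 3, 4, 5],   # from data nature / data type ...
        target=[3, 4, 5, 3, 9, 8, 8],   # ... to data type / spatial scale
        value=[60, 50, 35, 20, 40, 30, 25],
    ),
))
fig.write_html("sankey_data_pathways.html")  # writes an interactive diagram to disk
```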

As shown in Fig. 5 , there are distinct quantitative and qualitative methods to interpret equity. Most quantitative methods focused on descriptive analysis and linear models, which assume simple relationships within equity dimensions, that is, that dependent variables have a straightforward relationship with independent variables. Regarding quantitative analysis, descriptive statistics included correlation (12), chi-square (6), and analysis of variance (ANOVA) (5). Spatial analysis included geographic information systems (GIS) (15), Moran’s-I spatial autocorrelation (6), and spatial regression (5). Variables were also grouped together through principal component analysis (PCA) (9) and index-weighting (9). Logit models (13) and Monte-Carlo simulations (7) were also used to analyze data. In analyzing quantitative data, most research has focused on descriptive statistics, linear models, and the Moran’s I statistic, which have been effective in pinpointing areas with heightened physical and social vulnerability 25 , 91 , 92 . However, more complex models are needed to uncover the underlying mechanisms associated with equity in infrastructure.

figure 5

The quantitative pie chart has geographic information system (GIS), logit model, correlation, index-weighting, principal component analysis (PCA), monte-carlo simulation, chi-square, Moran’s- I spatial autocorrelation, analysis of variance (ANOVA), and spatial regression. The qualitative pie chart has validation, thematic coding, citizen science, sentiment analysis, conceptual analysis, participatory rural appraisal, document analysis, participatory assessment, photovoice, and ethnographic.
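
To make the spatial toolbox above concrete, the following is a minimal sketch of a global Moran’s I test on an infrastructure-loss indicator, using the libpysal and esda packages. The input file name and the column `outage_duration` are hypothetical placeholders, not data from the reviewed studies.

```python
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran

# Hypothetical input: census tracts with an outage-duration indicator per tract.
tracts = gpd.read_file("tracts_with_outages.geojson")

# Queen contiguity: tracts sharing a border or corner count as neighbors.
w = Queen.from_dataframe(tracts)
w.transform = "r"  # row-standardize the weights

# Global Moran's I tests whether outage duration is spatially clustered.
mi = Moran(tracts["outage_duration"], w)
print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")
```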

However, there has been a less frequent yet insightful use of advanced techniques like machine learning, agent-based modeling, and simulation. For example, Esmalian et al. 66 employed agent-based modeling to explore how social demographic characteristics impact responses to power outages during Hurricane Harvey. In a similar vein, Baeza et al. 93 utilized agent-based modeling to evaluate the trade-offs among three distinct infrastructure investment policies: prioritizing high-social-pressure neighborhoods, creating new access in under-served areas, and refurbishing aged infrastructure. Simulation models have been instrumental in understanding access to critical services like water 43 , health care 92 , and transportation 33 . Beyond these practical models, conceptual studies have also contributed innovative methods. Notably, Clark et al. 94 proposed gravity-weighted models, and Kim and Sutley 30 explored the use of genetic algorithms to measure accessibility to critical resources. These diverse methodologies indicate a growing sophistication in the field, embracing a range of analytical tools to address the complexities of infrastructure resilience.
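
In the same spirit as the agent-based studies cited above, the sketch below is a deliberately simplified household-outage model in plain Python. The generator-ownership rates and tolerance thresholds are hypothetical parameters, not values from Esmalian et al. or Baeza et al.

```python
import random

class Household:
    """Toy agent: a household with an income-dependent tolerance for power outages."""
    def __init__(self, income_group):
        self.income_group = income_group  # "low" or "high" (illustrative grouping)
        # Hypothetical generator-ownership rates and tolerance thresholds.
        self.has_generator = random.random() < (0.1 if income_group == "low" else 0.4)
        self.tolerance_hours = 12 if income_group == "low" else 36

    def experiences_hardship(self, outage_hours):
        """A household suffers hardship if the outage exceeds its tolerance and it has no substitute."""
        if self.has_generator:
            return False
        return outage_hours > self.tolerance_hours

# Simulate one hazard event with a 48-hour outage over a mixed population.
random.seed(0)
households = [Household("low") for _ in range(500)] + [Household("high") for _ in range(500)]
outage_hours = 48
for group in ("low", "high"):
    members = [h for h in households if h.income_group == group]
    share = sum(h.experiences_hardship(outage_hours) for h in members) / len(members)
    print(f"{group}-income households experiencing hardship: {share:.0%}")
```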

Regarding qualitative analysis, the methods included thematic coding (7), validation with stakeholders (9), sentiment analysis (4), citizen science (5), conceptual analysis (3), participatory rural appraisal (2), document analysis (2), participatory assessment (1), photovoice (1), and ethnographic methods (1). Qualitative methods were used to capture diverse angles of equity, offering a depth and context not provided by quantitative data alone. These methods are effective in understanding capacity equity, such as unexpected strategies and coping mechanisms that would otherwise go unnoticed 95 . Qualitative research can also capture the perspectives and voices of stakeholders through procedural equity. Interviews and focus groups can validate and enhance research frameworks 96 . Working collaboratively with stakeholders, as shown by Masterson et al. 97 , can lead to positive community changes such as updated planning policies. Qualitative methods can also narratively convey the personal hardships of infrastructure losses 98 . This approach recognizes that infrastructure issues are not just technical problems but are deeply intertwined with social, economic, and cultural dimensions.

Interlinkages of equity definitions

As shown in Fig. 6 , the frequency of each equity type was distributional-demographic (90), distributional-spatial (55), capacity (54), and procedural (16). It is notable to reflect on the intersections between the four definitions of equity. Among two-way linkages, the top three were DC (20), DS (16), and DP (9), all of which involve distributional-demographic equity. There were comparatively few studies linking 3 dimensions, except for DSC, which had 25 connections. Only 3 studies connected all 4 dimensions.

figure 6

Distributional-demographic had the highest number of studies and the greatest overlap with the remaining equity definitions of capacity, procedural, and distributional-spatial. Only 3 studies overlapped with the four equity definitions.

Distributional-demographic equity was the most studied equity definition. Table 2 shows how pathways of demographic equity relate to the different infrastructure systems and variables within distributional-demographic equity, including 728 unique pathways. As a reminder, pathways explore equity across the 8-dimensional framework; in this case, distributional-demographic equity is connected to infrastructure, treating these connections as pathways. Pathways with power (165), water (147), and transportation (112) were the most frequent, while those with stormwater (23) and emergency (9) services were the least frequent. Referencing demographics, the most common pathways were income (148), age (122), and ethnicity (115), while the least studied were gender (63), employment (35), marginalized populations (5), and intergenerational (1). Note the abbreviations for Tables 2 and 3 are power (P), water (W), transportation (T), food (F), health (H), sanitation (ST), communication (C), stormwater (SW), emergency (E), and general (G). Regarding distributional-demographic equity, several research papers showed that lower-income and minority households were the most studied in comparison to the other demographic variables. Lower-income and minority households faced greater exposure, more hardship, and less tolerance to withstand power, water, transportation, and communication outages during Hurricane Harvey 99 . These findings were replicated in disasters such as Hurricane Florence, Hurricane Michael, the COVID-19 pandemic, Winter Storm Uri, and Hurricane Hermine 65 , 91 , 100 , 101 . Several studies found that demographic vulnerabilities are interconnected and compounding, and often, distributional-demographic equity is a pre-existing inequality condition that is exacerbated by disaster impact 102 . For instance, Stough et al. 98 identified that respondents with disabilities faced increased struggles due to a lack of resources to access proper healthcare and transportation after Hurricane Katrina. Women were often overburdened by infrastructure loss as they were expected to “pick up the pieces” and substitute the missing service 103 , 104 . Fewer studies involved indigenous populations, young children, or future generations. Using citizen-science methods, Ahmed et al. 105 studied the struggles and coping strategies of the Santal indigenous group in responding to water losses in drought conditions. Studies normally did not account for the direct infrastructure losses experienced by children and instead concentrated on the impacts on their caretakers 106 ; however, this is likely due to restrictions surrounding research with children. Lee and Ellingwood 107 discussed how “intergenerational discounting makes it possible to allocate costs and benefits more equitably between the current and future generations” (p. 51). A slight difference in the discounting rate can lead to vastly different consequences and benefits for future generations. For example, the study found that insufficient investments in design and planning will only increase the cost and burden of infrastructure maintenance and replacement.
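
As a simple worked illustration of this sensitivity (our own arithmetic, not a result from Lee and Ellingwood), exponential discounting of a cost $C$ incurred $t = 100$ years from now at annual rate $r$ gives

$$
PV = \frac{C}{(1+r)^{t}}, \qquad \frac{C}{(1.02)^{100}} \approx \frac{C}{7.2}, \qquad \frac{C}{(1.03)^{100}} \approx \frac{C}{19.2},
$$

so raising the discount rate from 2% to 3% shrinks the present value of a cost borne a century from now by roughly a factor of 2.7, which correspondingly weakens the case for present-day investment on behalf of future generations.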

Distributional-spatial equity, which includes spatial grouping and urban-rural designation, was the second most studied aspect, particularly given the rise of open-data and location-intelligence data with spatial information. Table 3 shows the pathways of spatial equity connected to different infrastructures and variables. In total, 109 unique pathways were found with spatial (83) and urban-rural (26) characteristics. Power (27), transportation (22), water (16), and health (15) systems were the most frequent pathways, with stormwater (4), communication (3), and emergency (2) the least frequent. Urban-rural studies on communication and emergency services are entirely missing. Distributional-spatial equity studies, including spatial inequities and urban-rural dynamics, were often linked with distributional-demographic equity. For example, Logan and Guikema 46 defined “access rich” and “access poor” to measure different sociodemographic populations’ access to essential facilities. White populations had shorter distances to travel to open supermarkets and service stations in North Carolina 46 . Esmalian et al. 108 found that higher-income areas had fewer stores in their vicinity but still had better access to grocery stores in Harris County, Texas. This could be because higher-income households live in residential areas but have the capability to travel farther and visit more stores. Vulnerable communities could even be indirectly impacted by spatial spillover effects from neighboring areas 26 . Regarding urban-rural struggles, Pandey et al. 17 argued that inequities emerge when urban infrastructure growth lags with respect to the urban population while rural areas face infrastructure deficits. Rural municipalities had fewer resources, longer restoration times, and less institutional support to mitigate infrastructure losses 95 , 109 , 110 .

Capacity was the third most studied dimension and had 150 unique pathways to adaptations (54), access (43), and susceptibility (53). In connection with infrastructure systems, power (29), water (27), transportation (25), and food (22) had the greatest number of pathways. There were interesting connections between different infrastructures and variables of capacity. Access was most connected to food (11), transportation (10), and health (10) systems. Adaptations were most connected to water (15) and power (12) systems. This highlights how capacity equity manifests differently across infrastructure losses. Capacity equity was often connected with distributional equity since different sociodemographic groups have varying adaptations to infrastructure losses 78 . For example, Chakalian et al. 106 found that white respondents were 2.5 times more likely to own a power generator, while Kohlitz et al. 95 found that poorer households could not afford rainwater harvesting systems. These behaviors may also include tolerating infrastructure disruptions 111 , cutting back on current resources 112 , or experiencing increased suffering 113 . The capabilities approach offers a valuable perspective on access to infrastructure services 94 . It recognizes the additional time and financial resources that certain groups may need to access the same level of services, especially if travel networks are disrupted 114 , 115 and travel times are extended 33 . In rural regions, women, children, and lower-income households often reported traveling farther for resources 105 , 116 . These disparities are often influenced by socioeconomic factors, emphasizing the need for a nuanced understanding of how different communities are affected by and respond to infrastructure losses. As such, building capacity is not just about increasing the preparedness of households but also about adapting infrastructure systems to ensure equitable access, such as through the optimization of facility locations 69 .

Procedural equity was the least studied definition, with only 26 unique pathways involving stakeholder input and stakeholder engagement. Pathways to communication and emergency systems were absent. The greatest number of pathways were water services to stakeholder input (7) and stormwater services to stakeholder engagement (4). Stakeholder input can assist researchers in validating and improving their research deliverables. This approach democratizes the decision-making process and enhances the quality and relevance of research and planning outcomes. For instance, the involvement of local experts and residents in Tanzania through a Delphi process led to the development of a more accurate and locally relevant social resilience measurement tool 117 . Stakeholder engagement, such as citizen science methods, can incorporate environmental justice communities into the planning process, educate engineers and scientists, and collect reliable data that can be actively incorporated back into the community 118 , 119 , 120 . Such participatory approaches allow for a deeper understanding of community needs and challenges. In Houston, TX, the success of engaging high school students in assessing drainage infrastructure exemplified how community involvement can yield significant, practical data 119 . The student-collected data were approximately 74% consistent with assessments by trained inspectors, a promising result for communities assessing their own infrastructure resilience 119 . In a blend of research and practice, Masterson et al. 97 illustrated the practical application of procedural equity. By interweaving equity into their policy planning, Rockport, TX planners added accessible services and infrastructure upgrades for lower-income and racial-ethnic minority neighborhoods, directly benefiting underserved communities.

Pathways between equity, hazard, and infrastructure

For the hazards, tropical cyclones (34.6%) and floods (30.8%) make up over half of the studied hazards (Supplementary Figure 2A ), while power (21.2%), water (19.2%), transportation (15.4%), and health (12.0%) were the most frequently studied infrastructure services (Supplementary Figure 3A ). A pathway is used to connect equity to different dimensions of the framework, in this case, equity to infrastructure to hazard (Fig. 7 ). When considering these pathways, distributional-demographic (270) had the most pathways, followed by capacity (175), distributional-spatial (140), and procedural (28). The most common pathways across all infrastructure services were tropical cyclones and floods combined with distributional-demographic equity (Supplementary Figures 6A – 8A ). As shown in Fig. 7 , tropical cyclone (229) and flood (192) had the most pathways, while extreme temperatures (20) and pandemic (14) had the least. Although the pandemic is seemingly the least studied hazard, it is important to note that most of these studies appeared only after COVID-19. Power (120), transportation (107), and water (104) had the most pathways, whereas sanitation (33), communication (27), stormwater (21), and emergency (14) had the least. The figure shows specific gaps in the literature. Whereas the other three equity definitions had connections to every hazard event, procedural equity only had connections to tropical cyclone, flood, general, and drought. Health infrastructure only had pathways to tropical cyclone, flood, general, earthquake, and pandemic hazards. There were 106 pathways connecting equity to general hazards, which may suggest the need to examine the impacts of specific hazards on equity in infrastructure resilience.

figure 7

The Sankey diagram shows the flow from the different types of equity, or equity definitions, of distributional-demographic (D), capacity (C), distributional-spatial (S), and procedural (P) to hazard of tropical cyclone, flood, general, drought, earthquake, extreme temperature, and pandemic to infrastructure of power, transportation, water, health, food, communication, general, stormwater, emergency, and sanitation.

Research frameworks

Regarding research question 2, this research aims to understand frameworks of equity in infrastructure resilience. In exploring the frameworks, we found common focus areas of adaptations, access, vulnerability, validation, and welfare economics (Table 4 ). The full list of frameworks can be found in the online database uploaded to the DesignSafe Data Depot and in the Supplementary Information .

Adaptations

Household adaptations included the ability to prepare before a disaster as well as coping strategies during and after the disaster. Esmalian et al. 111 developed a service gap model based on survey data from residents affected by Hurricane Harvey. Lower-income households were less likely to own power generators, which could lead to an inability to withstand power outages 111 . To understand household adaptations, Abbou et al. 78 asked residents of Los Angeles, California about their experiences with electricity and water losses. The study showed that, compared to men, women used more candles and flashlights, and that people with higher education, regardless of gender, were more likely to use power generators. Using a Pressure and Release model, Daramola et al. 112 examined the level of preparedness for natural hazards in Nigeria. The study found that rural residents tended to use rechargeable lamps while urban residents used generators, likely due to the limited availability of electricity systems. Approximately 73% of participants relied on chemist shops to cope with constrained access to health facilities.

Access

Other frameworks focused on accessibility to resources. Clark et al. 94 developed the social burden concept, which combines resources, conversion factors, capabilities, and functionings into a travel-cost method for reaching critical resources. In an integrated physical-social vulnerability model, Dong et al. 92 calculated disrupted access to hospitals in Harris County, Texas. Logan and Guikema 46 integrated spatial planning, diverse vulnerabilities, and community needs into EAE services. In a case study of Wilmington, North Carolina, they showed how lower-income households had less access to grocery stores. In a predictive recovery monitoring spatial model, Patrascu and Mostafavi 26 found that the percentages of Black and Asian subpopulations were significant features for predicting the recovery of population activity, or the visits to essential services in a community.

Vulnerability

Several of the infrastructure resilience frameworks were grounded in social vulnerability assessments. For instance, Toland et al. 43 created a community vulnerability assessment based on an earthquake scenario that resulted in the need for emergency food and water resources. Using GIS, Oswald and Mohammed developed a transportation justice threshold index that integrated social vulnerability into the understanding of transportation systems 121 . With a Disruption Tolerance Index, Esmalian et al. 25 showed how demographic variables are connected with disproportionate impacts from power and transportation losses.

Validation

Additional studies were based on stakeholder input and expert opinion. Atallah et al. 36 established an ABCD roadmap for health services, which included acute life-saving services, basic institutional aspects for low-resource settings, community-driven health initiatives, and disease-specific interventions. Health experts were instrumental in providing feedback on the ABCD roadmap. Another example is the social resilience tool for water systems developed by Sweya et al. 117 and validated by experts and community residents. To assess highway resilience, Hsieh and Feng had transportation experts score 9 factors, including resident population, income, employment, connectivity, dependency ratio, distance to hospital, number of substitutive links, delay time in substitutions, and average degenerated level of services 122 .

Welfare economics

Willingness-to-pay (WTP) models reveal varied household investments in infrastructure resilience. Wang et al. 123 showed a wide WTP range, from $15 to $50 for those unaffected by disruptions to $120–$775 for affected, politically liberal individuals. Islam et al. 124 found that households with limited access to safe drinking water were more inclined to pay for resilient water infrastructure. Stock et al. 125 observed that higher-income households showed greater WTP for power and transportation resilience, likely due to more disposable income and higher expectations for service quality. These findings highlight the need to account for economic constraints in WTP studies: if a study does not adequately account for a person’s economic constraints, it may incorrectly interpret a lower ability to pay as a lower willingness to pay.
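
One way to avoid conflating ability and willingness to pay is to condition stated WTP on income explicitly. The sketch below is a minimal illustration using statsmodels with a hypothetical survey table; the column names and values are ours, not data from the cited studies.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stated-preference survey: WTP for resilient power service (USD/year),
# household income (USD thousands), and whether the household was affected by an outage.
survey = pd.DataFrame({
    "wtp":      [15, 40, 120, 300, 60, 775, 25, 90],
    "income":   [25, 40, 90, 120, 55, 150, 30, 70],
    "affected": [0, 0, 1, 1, 0, 1, 0, 1],
})

# Including income as a covariate separates economic constraint from preference:
# the 'affected' coefficient then reflects willingness conditional on ability to pay.
model = smf.ols("wtp ~ income + affected", data=survey).fit()
print(model.summary())
```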

In terms of policy evaluation for infrastructure resilience, studies like Ulak et al. 126 prioritized equitable power system recovery for different ethnic groups, favoring network renewal over increasing response crews. Baeza et al. 93 noted that infrastructure decisions are often swayed by political factors rather than technical criteria. Furthermore, Lee and Ellingwood 107 introduced a method for intergenerational discounting in civil infrastructure, suggesting more conservative designs for longer service lives to benefit future generations. These studies underscore the complex factors influencing infrastructure resilience policy, including equity, political influence, and long-term planning.

This systematic review is the first to explore how equity is incorporated into infrastructure resilience against natural hazards. By systematically analyzing the existing literature and identifying key gaps, the paper enhances our understanding of equity in this field and outlines clear directions for future research. This study is crucial for understanding the fundamental knowledge that brings social equity to the forefront of infrastructure resilience. Table 5 summarizes the primary findings of this systematic review of equity in infrastructure resilience literature, including what the studies are currently focusing on and the research gaps and limitations.

Our findings show a great diversity of frameworks and methods depending on the context in which equity is applied (Table 5 ). Moreover, we identify a lack of integrative formal and analytical tools. Therefore, a clear and standard framework is needed to operationalize inequity across infrastructures and hazards; what is missing are analytical tools and approaches to integrate equity assessment into decision-making.

Referring to research question 3, we further explore the current gaps in knowledge and future challenges of studying equity in infrastructure resilience. In elaborating on the gaps identified in our review, we propose that the next era of research questions and objectives should be (1) monitoring equity performance with improved data, (2) weaving equity into computational models, and (3) integrating equity into decision-making tools. Guided by principles of innovation, accountability, and knowledge, these objectives involve moving beyond distributional equity, recognizing understudied gaps of equity, and including all geographic regions and, by extension, all stakeholders (Fig. 8 ).

figure 8

The figure demonstrates that previous research has focused on detecting and finding evidence of disparity in infrastructure resilience during hazard events. It proposes that the next phase of research will monitor equity performance with improved data, weave equity into computational models, and integrate equity into decision-making tools in order to move beyond social and spatial distributions, recognize understudied gaps of equity, and include all geographic regions.

The first research direction is the monitoring of equity performance with improved data at more granular scales and with greater representation of impacted communities. Increased data availability provides researchers, stakeholders, and community residents with more detailed and accurate assessments of infrastructure losses. Many studies have used reliable, yet inherently approximate, data sources for infrastructure service outages. These sources include human mobility, satellite, points-of-interest visitation, and telemetry-based data (such as refs. 69 , 100 ). Private companies are often reluctant to share utility and outage data with researchers 127 . Thus, we encourage the shift towards transparent and open datasets from utility companies in both normal times and outage events. This aligns with open-data initiatives such as the Open Infrastructure Outage Data Initiative Nationwide (ODIN) 128 , Invest in Open Infrastructure 129 , and the Implementing Act on a list of High-Value Datasets 130 . Transparency in data fosters an environment of accountability and innovation to uphold equity standards in infrastructure resilience 131 . An essential aspect of this transparency involves acknowledging and addressing biases that may render certain groups ‘invisible’ within datasets. These digitally invisible populations may well be among the most vulnerable, such as unhoused people who may not have a digital footprint yet are very vulnerable to extreme weather 132 . Gender serves as a poignant example of such invisibility. Historical biases and societal norms often result in gender disparities being perpetuated in various facets of infrastructure design and resilience planning 133 . Women are frequently placed in caregiving roles, such as traveling to reach water (as shown in refs. 105 , 116 , 134 ) or worrying over the well-being of family members (as shown in refs. 103 , 135 ), which have been overlooked or marginalized in infrastructure planning processes.

If instances of social disparities are uncovered, researchers and practitioners could collaboratively cultivate evidence-based recommendations to manage infrastructure resilience. At the same time, approaches for responsible data management need to be developed that protect the privacy of individuals, especially marginalized and vulnerable groups 136 . There is a trade-off between proper representation of demographic groups and ensuring the privacy of individuals 45 , 67 . Despite this, very few studies call into question the fairness of data collection in capturing the multifaceted aspects of equity 137 , or the potential risks to communities as described in the EU’s forthcoming Artificial Intelligence Act 138 .

By extension, addressing the problem of digitally invisible populations and possible bias, Gharaibeh et al. 120 also emphasize that equitable data should represent all communities in the study area. Choices about data collection and storage can directly impact the management of public services and, by extension, the management of critical information 139 . For example, a significant problem with location-intelligence data collection is properly representing digitally invisible populations, as these groups are often marginalized in the digital space, leading to gaps in data 132 , 140 . Human mobility data, a specific type of location-intelligence data derived from cell phone location records, illustrate this issue. Vulnerable groups may not be able to afford or have frequent access to cell phones, resulting in a skewed understanding of population movements 141 . However, other studies have shown that digital platforms can empower marginalized populations to express sentiments of cultural identity and tragedy through active sharing and communication 142 . Ultimately, Hendricks et al. 118 recommend a “triangulation of data sources” to integrate quantitative and qualitative data, which would mitigate potential data misrepresentation and take advantage of online information. Moving ahead, approaches need to be developed for fair, privacy-preserving, and unbiased data collection that empowers especially vulnerable communities. At the same time, realizing that data gaps, especially in infrastructure-poor regions, may not be easy to address, we also follow Casali et al. 84 in calling for synthetic approaches and models that work on sparse data.

Few studies, such as refs. 45 , 66 , have created computational models to capture equity-infrastructure-hazard interactions, and these are initial attempts to quantify both the social impacts and the physical performance of infrastructure. This is echoed in the work of Soden et al. 143 , which found that only ~28% of studies undertake a quantitative evaluation of the differential impacts experienced in disasters. To enhance analytical and computational methods in supporting equitable decision-making, it is imperative for future studies to comprehensively integrate the social dimensions of infrastructure resilience. Therefore, the next research direction is the intentional weaving of equity into computational models. Whereas the majority of studies used descriptive statistics and linear modeling, complex computational models, such as agent-based simulations, offer the advantage of capturing the nonlinear interactions of equity in infrastructure systems. These tools also allow decision-makers to gain insights into the emergence of complex patterns over time. These simulation models can then be combined with specific metrics that measure infrastructural or social implications. Metrics might include susceptibility curves 144 , social burden cost estimates 94 , or social resilience assessments 76 . Novel metrics for assessing adaptive strategies, human behaviors, and disproportionate impacts (such as ref. 113 ) could also be further quantified through empirical deprivation costs for infrastructure losses 145 . These metrics are also a stepping-stone for formalizing and integrating equity into decision-making tools.
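
As one example of the kind of metric that could be embedded in such models, the sketch below defines a logistic susceptibility curve relating outage duration to the probability of household hardship. The functional form and the group-specific parameters are entirely hypothetical; the cited susceptibility-curve work defines its own empirically grounded forms.

```python
import math

def susceptibility(outage_hours, midpoint, steepness):
    """Logistic curve: probability of hardship as a function of outage duration."""
    return 1.0 / (1.0 + math.exp(-steepness * (outage_hours - midpoint)))

# Hypothetical group-specific parameters: lower-capacity households reach hardship sooner.
groups = {
    "lower-capacity":  dict(midpoint=12, steepness=0.30),
    "higher-capacity": dict(midpoint=36, steepness=0.15),
}

for name, params in groups.items():
    probabilities = [susceptibility(h, **params) for h in (6, 24, 48, 72)]
    print(name, [f"{p:.2f}" for p in probabilities])
```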

Another research direction is the integration of equity into decision-making tools. Key performance indicators and monitoring systems are essential for clarifying equity processes and outcomes and creating tangible tools for infrastructure planners, managers, engineers, and policy-makers. In particular, the literature discussed the potential for using equity in infrastructure resilience to direct infrastructure investments (such as refs. 93 , 126 , 146 ). Infrastructure resilience requires significant upfront investment and resource allocations, which generally favors wealthier communities. Communities may hold social, cultural, and environmental values that are not properly quantified in infrastructure resilience 147 . Since traditional standards of cost-benefit analyses used by infrastructure managers and operators primarily focus on monetary gains or losses, they would not favorably support significant investments to mitigate the human impacts of infrastructure losses on those most vulnerable 148 . This limitation also delays investments and leads to inaction in infrastructure resilience, resulting in unnecessary loss of services and social harm, potentially amplifying inequities, and furthering societal fragmentation. To bridge this gap, we propose to measure the social costs of infrastructure service disruptions as a way to determine the broad benefits of resilience investments 147 .

As the literature review found, several studies follow a welfare economics approach to quantify the social costs associated with infrastructure losses, such as the evaluation of policies (such as ref. 93 ) and willingness-to-pay models (such as ref. 125 ). Such economic functions are preliminary steps in quantifying equity as a cost measure; however, these models must avoid misinterpreting lower income as a lower willingness to invest. Lee and Ellingwood 107 proposed using an intergenerational discounting rate; however, it is important to recognize the flexibility of options for future generations 149 . Teodoro et al. 149 point to the challenges of using (fixed) discount rates and advocate for a procedural justice-based approach that maximizes flexibility and adaptability. Further research is needed to quantify the social costs of infrastructure disruptions and integrate them into infrastructure resilience assessments, such as by calculating the deprivation costs of service losses for vulnerable populations.

Our review shows that certain demographic groups, such as indigenous populations and persons with disabilities, as well as intergenerational equity issues, have not been sufficiently studied 150 . This aligns with the conclusions of Seyedrezaei et al. 151 , who found that the majority of studies about equity in the built environment focused on lower-income and minority households. Indigenous populations face significant geographical, cultural, and linguistic barriers that make their experiences with disrupted infrastructure services distinct from those of the broader population 152 .

Even though intergenerational justice issues have increasingly attracted attention in the climate change discussion, intergenerational equity issues in infrastructure resilience assessments have received limited attention. We argue that intergenerational equity warrants special attention because infrastructure systems have long life cycles that span multiple generations, and ultimately the decisions on financing, restoration, and new construction will have a significant impact on the ability of future generations to withstand the impact of stronger climate hazard events. Non-action may lead to tremendous costs in the long run 149 . It is the responsibility of current research to understand the long-term effects of equity in infrastructure management to mitigate future losses and maintain flexibility for future generations. As a matter of procedural justice, these generations should have the space to make choices instead of being locked in by today’s decisions. Future studies should develop methods to measure and integrate intergenerational inequity into infrastructure resilience assessments.

Given the specific search criteria and focus on equity, infrastructure, and natural hazards, we found a major geographic focus on the United States. Large portions of the global north and global south were not represented in the analysis. This could be due to the search criteria of the literature review; however, it is important to recognize potential geographic areas that are isolated from academic studies on infrastructure resilience. These regions face different infrastructure challenges (e.g., intermittent services) as well as limited data availability. A dearth of studies on equitable infrastructure resilience could contribute to greater inequity in those regions due to the absence of empirical evidence and proper methodological solutions. This aligns with other findings on sustainable development goals and climate adaptation more broadly 153 . Global research efforts, along with common data platforms, standards, and methods (see above), that include international collaborations among researchers across the global north and global south can bridge this gap and expand the breadth of knowledge and solutions for equitable infrastructure resilience.

Finally, while significant attention has been paid to distributional demographic and spatial inequity issues 151 , several definitions of equity remain underutilized. Procedural and capacity equity hold the greatest potential for people to feel more included in the infrastructure resilience process. Instead of depending directly on the infrastructure systems, individual households can adapt to disrupted periods through substitute services and alternative actions (such as ref. 78 ). To advance procedural equity in infrastructure resilience, citizen-science research or participatory studies can begin by empowering locals to understand and monitor their resilience (such as ref. 76 ) or failures in their infrastructure systems (such as ref. 120 ). As referenced by Masterson and Cooper 154 , the ladder of citizen power can serve as a framework for ethically engaging with community partners for procedural equity. The ladder, originally developed by Arnstein 155 , includes non-participation, tokenism, and citizen power. Table 3 shows that most research falls into non-participation: survey data and information are extracted without any community guidance. The limited studies that have branched into community involvement still remain restricted to the tokenism step, such as models that are validated by stakeholders or receive expert opinions on their conceptual frameworks. Future studies should expand inquiries regarding the procedural and capacity dimensions of equity in infrastructure resilience assessments and management. For instance, research could map out where inequities occur in the decision-making process and in targeted spatial regions, as well as in the allocation of resources for infrastructure resilience. It could also continue pursuing inclusive methodologies such as participatory action research and co-design processes, and it should investigate effective methods to genuinely integrate different stakeholders and community members from the conception through the evaluation of research.

Although the primary audience of this literature review is academic scholars and fellow researchers, the identified gaps are also of importance to practitioners, governmental agencies, community organizations, and advocates. By harnessing the transformative power of equity, studies in infrastructure resilience can transcend their traditional role and develop equity-focused data, modeling, and decision-making tools that consider everyone in the community. The integration of equity aspects within the framework of infrastructure resilience not only enhances the resilience of infrastructure systems but also contributes to the creation of inclusive and resilient communities. Infrastructure resilience would then be not just a shield against adversity but also a catalyst for positive social and environmental change.

Data availability

The Excel database created for this review, which includes information on the key parts of the 8-dimensional equity framework, will be uploaded to DesignSafe-CI.

Oh, E. H., Deshmukh, A. & Hastak, M. Criticality assessment of lifeline infrastructure for enhancing disaster response. Nat. Hazards Rev. 14 , 98–107 (2013).

Tripathi, B., Thomson Reuters Foundation. in Reuters (2023).

Hallegatte, S., Rentschler, J. & Rozenberg, J. Lifelines: the resilient infrastructure opportunity (2019).

Scherzer, S., Lujala, P. & Rød, J. K. A community resilience index for Norway: an adaptation of the Baseline Resilience Indicators for Communities (BRIC). Int. J. Disaster Risk Reduct. 36 , 101107 (2019).

Platt, S., Gautam, D. & Rupakhety, R. Speed and quality of recovery after the Gorkha Earthquake 2015 Nepal. Int. J. Disaster Risk Reduct. 50 , 101689 (2020).

George Washington University Milken Institute School of Public Health & University of Puerto Rico Graduate School of Public Health. Ascertainment of the estimated excess mortality from hurricane Maria in Puerto Rico (2018).

National Infrastructure Advisory Council. Critical Infrastructure Resilience Final Report and Recommendations (2010).

Hosseini, S., Barker, K. & Ramirez-Marquez, J. E. A review of definitions and measures of system resilience. Reliab. Eng. Syst. Saf. 145 , 47–61 (2016).

Berkeley, A. & Wallace, M. A Framework for establishing critical infrastructure resilience goals final report and recommendations by the council (Cybersecurity and Infrastructure Security Agency, 2010).

Mehvar, S. et al. Towards resilient vital infrastructure systems–challenges, opportunities, and future research agenda. Nat. Hazards Earth Syst. Sci. 21 , 1383–1407 (2021).

United Nations Office for Project Services. Inclusive Infrastructure for Climate Action. (2022).

UN Office for Disaster Risk Reduction. Principles for Resilient Infrastructure. (2022).

Schlör, H., Venghaus, S. & Hake, J.-F. The FEW-Nexus city index—measuring urban resilience. Appl. Energy 210 , 382–392 (2018).

Hart, D. K. Social equity, justice, and the equitable administrator. Public Adm. Rev. 34 , 3 (1974).

Cook, K. S. & Hegtvedt, K. A. Distributive justice, equity, and equality. Annu. Rev. Sociol. 9 , 217–241 (1983).

Boakye, J., Guidotti, R., Gardoni, P. & Murphy, C. The role of transportation infrastructure on the impact of natural hazards on communities. Reliab. Eng. Syst. Saf. 219 , 108184 (2022).

Pandey, B., Brelsford, C. & Seto, K. C. Infrastructure inequality is a characteristic of urbanization. Proc. Natl Acad. Sci. 119 , e2119890119 (2022).

Selected Sources on Quantitative Research in Social Work

The sources below discuss quantitative research in social work; a brief excerpt or summary accompanies each.

  1. Social Work Research Methods That Drive the Practice

    Social work researchers send out a survey, receive responses, aggregate the results, analyze the data, and form conclusions based on trends. Surveys are one of the most common research methods social workers use, and for good reason: they tend to be relatively simple and are usually affordable. (The first sketch after this list illustrates this aggregate-and-summarize workflow.)

  2. Nature and Extent of Quantitative Research in Social Work Journals

    Quantitative research methods are an essential aspect of social work research. By applying these methods, social work scholars can evaluate interventions more accurately, generalise findings and test theories. Social work scholars have therefore outlined the need to increase the usage and quality of quantitative methods (Sheppard, 2016; Lippold et al., 2017).

  3. The Impact of Quantitative Research in Social Work

    The importance of quantitative research in the social sciences generally, and in social work specifically, has been highlighted in recent years in both an international and a British context. Consensus opinion in the UK is that quantitative work is the "poor relation" in social work research, leading to a number of initiatives.

  4. Social Work Research Methods

    Social work research means conducting an investigation in accordance with the scientific method. Its aim is to build the social work knowledge base in order to solve practical problems in social work practice or social policy.

  5. Quantitative Research Methods for Social Work: Making Social Work Count

    This book arose from Economic and Social Research Council funding to address the quantitative skills gap in the social sciences. The grants were applied for under the auspices of the Joint University Council Social Work Education Committee to upskill social work academics and develop a curriculum resource with teaching aids.

  6. The Positive Contributions of Quantitative Methodology to Social Work

    Quantitative social work research does face peculiarly acute difficulties arising from the intangible nature of its variables, the fluid, probabilistic way in which these variables are connected, and the degree to which outcome criteria are subject to dispute (pp. 9-10).

  7. Shaping Social Work Science: What Should Quantitative Researchers Do?

    Among its recommendations: social work researchers should incorporate the latest methodological advances from other disciplines, and they should use quantitative methods to address the most pressing and challenging issues of social work research and practice.

  8. Quantitative Research

    This entry describes the definition, history, theories, and applications of quantitative methods in social work research. Unlike qualitative research, quantitative research emphasizes precise, objective, and generalizable findings, and its methods rest on probability and statistical theory.

  9. What Is Quantitative Research? An Overview and Guidelines

    In an era of data-driven decision-making, a comprehensive understanding of quantitative research is indispensable. Current guides often provide fragmented insights, while more comprehensive sources remain lengthy and less accessible.

  10. What Is Quantitative Research?

    Quantitative research is the opposite of qualitative research, which involves collecting and analyzing non-numerical data (e.g., text, video, or audio). Quantitative research is widely used in the natural and social sciences, including biology, chemistry, psychology, economics, sociology, and marketing.

  11. Quantitative and Qualitative Research

    The purpose of quantitative research is to generate knowledge and create understanding about the social world. It is used by social scientists, including communication researchers, to observe phenomena or occurrences affecting individuals.

  12. Causality and Causal Inference in Social Work

    This article provides an overview of the nature of causality, examines how causality is treated in social work research and practice, highlights the role of quantitative and qualitative methods in the search for causality, and demonstrates how both methods can be employed to support a "science" of social work.

  13. A Quick Guide to Quantitative Research in the Social Sciences

    An easy-to-use guide for anyone who needs quick and simple advice on the quantitative aspects of research in the social sciences, covering subjects such as education, sociology, business, and nursing, and aimed especially at qualitative researchers who need to venture into the world of numbers.

  14. Quantitative Measurement

    The first step is specifying variables and attributes. At this point in quantitative research you should have a research question with at least one independent and at least one dependent variable, and variables must be able to vary.

  15. A Practical Guide to Writing Quantitative and Qualitative Research …

    Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. The hypotheses provide directions to guide the study, solutions, explanations, and expected results. Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes.

  16. Quantitative Research Questions

    The type of research you are conducting will shape the research question you ask. Probably the easiest to think of are quantitative descriptive questions, for example, "What is the average student debt load of MSW students?" That is a descriptive question, and an important one, but it does not try to build a causal relationship. (The first sketch after this list works through a descriptive question of exactly this kind.)

  17. Quantitative vs. Qualitative Research

    Quantitative research articles tackle research questions that can be measured numerically and described using statistics; an example of quantitative research would be a randomized controlled trial. Hints that a study is quantitative: it contains statistical analysis, uses a large sample size, and takes an objective stance that leaves little room to argue with the numbers. (The second sketch after this list simulates a small randomized comparison.)

  18. Research Design in Social Work: Qualitative and Quantitative Methods

    A text by Anne Campbell, Brian Taylor and Anne McGlade on qualitative and quantitative research design in social work.

  19. Quantitative Research Methods for Social Work: Making Social Work Count

    A comprehensive resource for students and educators, packed with activities and examples from social work covering the basic concepts of quantitative research methods, including reliability, validity, probability, variables and hypothesis testing, and exploring key areas of data collection, analysis and evaluation. (The third sketch after this list shows one routine reliability check.)

  20. Quantitative Research Methods for Social Work: Making Social Work Count (ResearchGate record)

    A 2017 publication record for the book by Barbra Teater and colleagues.

  21. Qualitative vs Quantitative Research: What's the Difference?

    The main difference between quantitative and qualitative research is the type of data they collect and analyze: quantitative data is information about quantities, and therefore numbers, while qualitative data is descriptive and concerns phenomena that can be observed but not directly measured, such as language.

  22. Quantitative Research

    Quantitative research is used in social science research to study human behavior, attitudes, and social structures. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform social policies, educational programs, and community interventions.

  23. "Walking the tightrope": Clinical social workers' use of diagnostic …

    Although there have been quantitative surveys of social workers' use of the DSM, this is the first qualitative study to examine how social workers actually navigate these two worldviews. Thirty clinical social workers took part in individual interviews, and their responses were analyzed thematically using HyperRESEARCH.

  24. Weaving equity into infrastructure resilience research: a decadal review and future directions

    Coleman, N., Li, X., Comes, T. et al. npj Nat. Hazards 1, 25 (2024). https://doi.org/10.1038/s44304-024-00022-x. In analyzing quantitative data, most research has focused on using descriptive statistics, linear models, and Moran's I statistic. (The final sketch after this list demonstrates all three techniques on synthetic data.)
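The descriptive question in item 16 ("What is the average student debt load of MSW students?") can be answered with nothing more than a point estimate and a margin of error once survey responses are collected. The sketch below is a minimal illustration of the aggregate-and-summarize workflow from item 1; the response values are invented, and a normal approximation is used for the confidence interval.

    import numpy as np

    # Hypothetical survey responses: self-reported student debt (in dollars)
    # for a small sample of MSW students. Real data would come from a questionnaire.
    debt = np.array([31000, 42000, 27500, 55000, 38000, 61000,
                     29500, 47000, 52000, 33500, 40000, 36500], dtype=float)

    n = debt.size
    mean = debt.mean()
    sd = debt.std(ddof=1)        # sample standard deviation
    se = sd / np.sqrt(n)         # standard error of the mean

    # 95% confidence interval via a normal approximation; with a sample this
    # small, a t critical value would give a slightly wider interval.
    low, high = mean - 1.96 * se, mean + 1.96 * se

    print(f"n = {n}, mean = {mean:.0f}, 95% CI = ({low:.0f}, {high:.0f})")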
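Item 17 points to the randomized controlled trial as the archetypal quantitative design. The sketch below simulates a toy two-group comparison and applies Welch's two-sample t-test; the outcome scale, group sizes, and built-in treatment effect are assumptions made purely for illustration, not values from any source listed above.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical outcome scores (e.g., a 0-100 well-being scale) for
    # participants randomly assigned to a control or an intervention group.
    control = rng.normal(loc=50, scale=10, size=80)
    treatment = rng.normal(loc=55, scale=10, size=80)   # simulated +5 effect

    # Welch's t-test: does the mean outcome differ between the two groups?
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    print(f"mean difference = {treatment.mean() - control.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

With real trial data the same test would be preceded by checks of the randomization and of the outcome's distribution; the simulation only shows the mechanics of the comparison.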
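Item 19 lists reliability among the core concepts of quantitative methods. A common internal-consistency check for a multi-item scale is Cronbach's alpha; the sketch below computes it directly from the standard formula on made-up Likert-type responses (the scale, items, and respondents are all hypothetical).

    import numpy as np

    # Hypothetical responses: 6 respondents x 4 items of a Likert-type scale.
    items = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 4, 5],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [3, 4, 3, 3],
    ], dtype=float)

    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")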
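The excerpt in item 24 notes that quantitative work in that literature has leaned on descriptive statistics, linear models, and Moran's I. The sketch below computes all three on synthetic data: the grid of areas, the predictor, and the contiguity rule are invented, and the least-squares fit and Moran's I are evaluated directly from their formulas rather than through any particular spatial-statistics package.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data for 25 areas on a 5 x 5 grid: one predictor x and
    # an outcome y with a spatial trend plus noise.
    coords = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
    x = rng.normal(size=25)
    y = 2.0 + 1.5 * x + 0.4 * coords[:, 0] + rng.normal(scale=0.5, size=25)

    # Descriptive statistics for the outcome.
    print(f"mean = {y.mean():.2f}, sd = {y.std(ddof=1):.2f}")

    # Simple linear model (OLS) of y on x, fit via least squares.
    X = np.column_stack([np.ones_like(x), x])
    intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"OLS fit: y = {intercept:.2f} + {slope:.2f} * x")

    # Moran's I for y, using rook contiguity (neighbors share a grid edge).
    W = (np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=2) == 1).astype(float)
    z = y - y.mean()
    n, s0 = y.size, W.sum()
    morans_i = (n / s0) * (z @ W @ z) / (z @ z)
    print(f"Moran's I = {morans_i:.3f}")

Writing Moran's I out this way keeps the spatial weights matrix and the standardization explicit; in practice a spatial-analysis library would typically supply the statistic along with a permutation-based significance test.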