Evidence-Based? Research-Based? What Does It All Mean?

Have you ever felt puzzled trying to discern the difference between the terms evidence-based and research-based? Or have you ever found yourself feeling intimidated when someone asked you, “But is that program or practice evidence-based?” I know I have. To clarify my understanding, I reached out to my colleagues here at the Center and to my old friend, Google. I’ve come to the following understandings and a bit of friendly advice – stay curious! Please keep reading if you’re feeling as perplexed as I was.

Clarifying the Difference between Research-Based and Evidence-Based

My current working definition of research-based instruction covers practices and programs that are grounded in well-supported, documented theories of learning. The instructional approach rests on research that supports the principles it incorporates, but there may be no specific study – no evidence of its own – that directly demonstrates its effectiveness.

Defining evidence-based practice has been more headache-inducing because the term is frequently and widely used to mean a myriad of things. Currently, I understand evidence-based practices to be those that have been tested with experimental studies (think randomly assigned control groups), quasi-experimental studies (comparison groups that are not randomized), or well-designed, well-implemented correlational studies with statistical controls for selection bias. In brief, a specific study (or studies) has been conducted to test the practice’s effectiveness.
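To make that last category concrete, here is a purely illustrative sketch – the data and program are invented – of why correlational studies need statistical controls for selection bias. When stronger students are more likely to opt into a program, a naive comparison of means overstates the program’s effect; comparing outcomes within bands of a prior score (a simple form of stratification) removes much of that bias.

```python
# Invented records of (prior_score_band, used_program, outcome), where
# higher-achieving students were more likely to choose the program.
from collections import defaultdict

records = [
    ("low", True, 62), ("low", True, 64),
    ("low", False, 58), ("low", False, 60), ("low", False, 59), ("low", False, 61),
    ("high", True, 88), ("high", True, 90), ("high", True, 86), ("high", True, 92),
    ("high", False, 85),
]

def mean(xs):
    return sum(xs) / len(xs)

def naive_effect(records):
    """Difference in mean outcome, ignoring prior achievement."""
    used = [o for _, u, o in records if u]
    not_used = [o for _, u, o in records if not u]
    return mean(used) - mean(not_used)

def adjusted_effect(records):
    """Program-vs-no-program gap averaged within prior-score bands."""
    by_band = defaultdict(lambda: {True: [], False: []})
    for band, used, outcome in records:
        by_band[band][used].append(outcome)
    gaps = [mean(g[True]) - mean(g[False]) for g in by_band.values()]
    return mean(gaps)

# naive_effect(records) is about 15.7 points, but adjusted_effect(records)
# is 3.75: most of the naive gap reflects who chose the program, not the
# program itself.
```

Real studies use richer controls (regression covariates, propensity scores), but the logic is the same: the comparison is only meaningful once who-selected-in has been accounted for.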

By no means are these definitions ready for Merriam-Webster, but they are helping me to make sense of the terms.

So what do you say or ask when “research” is thrown your way?

Recently, I met with a group of literacy coaches and we discussed how to respond when a fellow educator approaches them with “research” either supporting or refuting an instructional practice or program. My best advice to them probably sounded like a Viking River Cruise commercial – “Be curious!” Below are some examples of ways to respond to demonstrate that you are open to learning more.

  1. Thank you for bringing that information to my attention. Can you share your source or the article so I can read it too, and we can talk about it together?
  2. Please talk more about what you have learned (or read or heard). I’m curious to learn more about:
     a. Whether the research was published in a peer-reviewed journal or was sponsored by a publisher or other interested party.
     b. The sample size, or the number of schools/students involved in the study.
     c. The demographics of the subjects involved in the study.
     d. The type of research conducted.
  3. I’m wondering how many studies have been conducted that replicate those results.
  4. That research sounds important. Can you share the source with me? Perhaps it will be helpful for our grade-level team to read it and discuss the findings together.

As educators, we are always looking for the most effective ways to support our students. Stay open to new findings, but slow the process down and probe deeper to learn whether there truly is current research behind what people are claiming. Then evaluate the credibility of the source and the methods used in the research or critique, and don’t forget to rely on trusted sources like the What Works Clearinghouse. You might also appreciate a lecture by Maren Aukerman that discusses comprehensive, research-informed literacy instruction. The more you dig, the more you may find that many practices and programs touted as evidence-based are based on personal anecdotes and stories, or their research base is flimsy at best.




“Evidence-Based” vs. “Research-Based”: Understanding the Differences

Often, when reviewing resources, programs, or assessments, we might come across terms like “evidence-based” or “research-based.” These terms each tell us something about the resources that they describe and the evidence supporting them. Understanding each term’s meaning can help us make informed decisions when selecting and implementing resources.

So what do these terms mean, exactly?

Typically, the terms Evidence-Based Practices or Evidence-Based Programs refer to individual practices (for example, single lessons or in-class activities) or programs (for example, year-long curricula) that are considered effective based on scientific evidence. To deem a program or practice “evidence-based,” researchers will typically study the impact of the resource(s) in a controlled setting – for example, they may study differences in skill growth between students whose educators used the resources and students whose educators did not. If sufficient research suggests that the program or practice is effective, it may be deemed “evidence-based.”

Evidence-Informed (or Research-Based) Practices are practices that were developed based on the best research available in the field. This means that users can feel confident that the strategies and activities included in the program or practice have a strong scientific basis for their use. Unlike Evidence-Based Practices or Programs, Research-Based Practices have not themselves been tested in a controlled setting.

What about assessment?

Terms like “evidence-based” and “research-based” are often used to describe intervention activities, like strategies or curricula designed to build skills in specific areas. But the process of measuring skills with assessment tools can be evidence-based as well. An assessment process can be considered Evidence-Based Assessment if:

  • The choice of skills to be measured by the assessment was informed by research;
  • The assessment method and measurement tools used are informed by scientific research and theory and meet the relevant standards for their intended uses; and
  • The way that the assessment is implemented and interpreted is backed by research.

Using evidence-based assessment to guide or evaluate an intervention gives us confidence that the process is well-suited for our purpose, is grounded in scientific theory, and will be effective for our students.

What Standards Exist for Educational Assessments?

The process of Evidence-Based Assessment involves the use of a measurement tool that “meets the relevant standards for [its] intended uses.” What are the relevant standards, and how can we know whether a tool meets them?

Some foundational standards for educational assessments, as compiled by experts in the educational, psychological, and assessment fields, include:

  • Validity for an Intended Use:  the tool should have been researched to determine that it is valid, or appropriate, for the decisions we may make based on its results. Just like we wouldn’t use a math quiz to inform whether a student needs additional practice with reading comprehension, we shouldn’t use an assessment for purposes outside of those that research has deemed “valid.”
  • Reliability:  the tool should have been researched to ensure that it meets expectations for reliability, or consistency. For example, researchers might explore whether the tool produces similar results if it is completed twice in a short period of time. Reliability can be explored via a variety of methods, depending on the measurement tool.
  • Fairness:  the tool should have been researched to explore how fair, or unbiased, it is among different subgroups of students, such as subgroups based on race, ethnicity, or cultural background. Using a biased measurement tool can lead to biased decision-making and threaten our ability to provide equitable services.
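As a concrete illustration of the reliability standard above, here is a small sketch (all scores invented) of the test-retest approach: give the same tool to the same students twice, a short time apart, and estimate reliability as the correlation between the two sets of scores.

```python
# Test-retest reliability sketch with hypothetical data: the same
# assessment is completed by eight students two weeks apart, and
# reliability is estimated as the Pearson correlation between scores.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented scores for eight students on the two administrations.
first_administration = [12, 15, 9, 20, 18, 11, 14, 17]
second_administration = [13, 14, 10, 19, 17, 12, 15, 16]

r = pearson_r(first_administration, second_administration)
# r comes out near 0.98 here; values close to 1.0 indicate that the
# tool ranks students consistently across the two administrations.
```

Test-retest correlation is only one of several reliability methods (internal consistency and inter-rater agreement are others), which is why the bullet above notes that the appropriate method depends on the measurement tool.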

Specific standards within each of these domains, and others, are compiled in the handbook, “Standards for Educational and Psychological Testing” (2014), written by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. This handbook can be a useful companion when reviewing the specific evidence behind measurement tools.

In Conclusion

Terms like “evidence-based” or “research-based” are useful indicators of the type of evidence behind programs, practices, or assessments – however, they can only tell us so much about the specific research behind each tool. For situations where more information on a resource’s evidence base would be beneficial, it may be helpful to request research summaries or articles from the resource’s publisher for further review.

Further Reading

  • Hunsley, J., & Mash, E. J. (2007). Evidence-based assessment. Annual Review of Clinical Psychology, 3, 29–51.
  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
  • U.S. Department of Education (2016). Using Evidence to Strengthen Education Investments.

Solution Tree Blog

Research-Based Learning: A Lifelong Learning Necessity

  “Give a person a fish and he will eat for a day; teach a person to fish and she will eat for a lifetime.” – Adapted from a saying by an unknown author

What is Research-Based Learning?

Research-based learning (RBL) is a framework that helps prepare students to be lifelong inquirers and learners. The term “research,” which often conjures up a picture of students writing research reports, is here defined as a way of thinking about teaching and learning – a perspective, a paradigm. It is a specific approach to classroom teaching that places less emphasis on teacher-centered delivery of content and facts and greater emphasis on students as active researchers.

In a research-based learning approach, students actively search for and then use multiple resources, materials, and texts in order to explore important, relevant, and interesting questions and challenges. They find, process, organize and evaluate information and ideas as they build reading skills and vocabulary. They learn how to read for understanding, form interpretations, develop and evaluate hypotheses, and think critically and creatively. They learn how to solve problems, challenges, and dilemmas. Finally, they develop communication skills through writing and discussion.

In the five stages of research-based learning, students:

a. Identify and clarify issues, questions, challenges, and puzzles. A key component of research-based learning is the identification and clarification of issues, problems, challenges, and questions for discussion and exploration. Learners can find relevance in the work they are doing and become deeply involved in the learning process.

b. Find and process information. Students search for, find, closely read, process, and use information related to the identified issue or question from one or more sources. As they seek out resources and read information, they organize, classify, categorize, define, and conceptualize data. In the process, they become better readers.

c. Think critically and creatively. Students use their researched information to compare and contrast, interpret, apply, infer, analyze, synthesize, and think creatively.

d. Apply knowledge and ideas and draw conclusions. Students use what they have learned to draw conclusions, complete an authentic task, summarize results, solve problems, make decisions, or answer key questions.

e. Communicate results. Students communicate the results of their research activities in a number of possible ways, such as a written research report, a persuasive essay, a book designed to teach younger students, a math problem solution, a plan of action, or a slide presentation to members of the community.

The Teacher’s Role

Teachers play a key role in the success of research-based instruction by engaging and involving students in information gathering and processing. While teachers might occasionally provide information through lectures, and textbooks are used as a source of information, the emphasis is on students learning how to seek out and process resources themselves. Teachers provide a climate that supports student curiosity and questioning, and they enable students to ask and answer questions and pose problems. The classroom climate is conducive to using higher-order thinking and problem-solving skills to apply knowledge. Teachers build ways for students to take ownership of their learning and to find value and purpose in it.

In a research-based learning classroom, teachers often act more like coaches: guiding students as they develop questions and problems; helping students find, read, sort, and evaluate information; giving students the opportunity to draw their own conclusions; and providing the time and opportunity for students to communicate results.

Finally, one of the most important components of a successful research-based learning program is the ability to help students understand and apply this approach consistently, by providing them with research-based opportunities for learning. Thus students are encouraged to bring in additional materials and resources to help the class understand a topic, choose and complete projects and performance tasks as part of their units of study, and discuss issues using evidence from sources of information. The classroom climate and environment continually encourage students to express their opinions, problem solve, and think at higher levels.

Student Outcomes

Significant outcomes occur when this approach is used over time. Learning how to search for and find reliable information and resources is a skill that matters for a lifetime of learning. Reading many different kinds of texts strengthens reading skills and builds vocabulary. Thinking skills develop as students classify, organize, and synthesize information. “Habits of mind,” such as perseverance and resilience, are strengthened through long-term projects. Writing skills develop through note-taking, reflection activities, and many different types of writing tasks.

In addition, students feel greater ownership for their learning and the learning process and thus develop greater self-esteem with regard to learning. There is greater interest in and curiosity about learning and a willingness to work harder to learn. Students are more likely to retain information longer because it is more meaningful to them and organized in a more interesting fashion.

Finally, students are able to learn the difference between reliable and unreliable information, ideas, and resources, a key need in today’s world with so much misleading and erroneous information.

Summary

The stages of research-based learning, key activities, and student outcomes are summarized in chart one, below. This framework also fits nicely with the four-phase model of instruction examined in my book Teaching for Lifelong Learning: How to Prepare Students for a Changing World (Solution Tree Press, 2021) and in a previous Solution Tree blog post, Using a Four-Phase Instructional Model to Plan and Teach for Lifelong Learning.

Teachers who make research-based learning part of their regular teaching routine should experience greater interest and involvement from their students, and should help students develop both the skills and the fundamental knowledge base that are important for a lifetime of learning.

[Chart one: the stages of research-based learning, key activities, and student outcomes]




Evidence-Based Research Series – Paper 1: What Evidence-Based Research is and why is it important? (Journal of Clinical Epidemiology, September 2020)

Affiliations

  • 1 Johns Hopkins Evidence-based Practice Center, Division of General Internal Medicine, Department of Medicine, Johns Hopkins University, Baltimore, MD, USA.
  • 2 Digital Content Services, Operations, Elsevier Ltd., 125 London Wall, London, EC2Y 5AS, UK.
  • 3 School of Nursing, McMaster University, Health Sciences Centre, Room 2J20, 1280 Main Street West, Hamilton, Ontario, Canada, L8S 4K1; Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Inndalsveien 28, Bergen, P.O.Box 7030 N-5020 Bergen, Norway.
  • 4 Department of Sport Science and Clinical Biomechanics, University of Southern Denmark, Campusvej 55, 5230, Odense M, Denmark; Department of Physiotherapy and Occupational Therapy, University Hospital of Copenhagen, Herlev & Gentofte, Kildegaardsvej 28, 2900, Hellerup, Denmark.
  • 5 Musculoskeletal Statistics Unit, the Parker Institute, Bispebjerg and Frederiksberg Hospital, Copenhagen, Nordre Fasanvej 57, 2000, Copenhagen F, Denmark; Department of Clinical Research, Research Unit of Rheumatology, University of Southern Denmark, Odense University Hospital, Denmark.
  • 6 Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Inndalsveien 28, Bergen, P.O.Box 7030 N-5020 Bergen, Norway. Electronic address: [email protected].
  • PMID: 32979491
  • DOI: 10.1016/j.jclinepi.2020.07.020

Objectives: There is considerable actual and potential waste in research. Evidence-based research ensures worthwhile and valuable research. The aim of this series, which this article introduces, is to describe the evidence-based research approach.

Study design and setting: In this first article of a three-article series, we introduce the evidence-based research approach. Evidence-based research is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner.

Results: We describe evidence-based research and provide an overview of the approach of systematically and transparently using previous research before starting a new study to justify and design the new study (article #2 in series) and-on study completion-place its results in the context with what is already known (article #3 in series).

Conclusion: This series introduces evidence-based research as an approach to minimize unnecessary and irrelevant clinical health research that is unscientific, wasteful, and unethical.

Keywords: Clinical health research; Clinical trials; Evidence synthesis; Evidence-based research; Medical ethics; Research ethics; Systematic review.

Copyright © 2020 Elsevier Inc. All rights reserved.

The two later papers in the series, referenced in the abstract above:

  • Lund H, et al. Evidence-Based Research Series – Paper 2: Using an Evidence-Based Research approach before a new study is conducted to ensure value. J Clin Epidemiol. 2021;129:158–166. doi: 10.1016/j.jclinepi.2020.07.019. PMID: 32987159.
  • Lund H, et al. Evidence-Based Research Series – Paper 3: Using an Evidence-Based Research approach to place your results into context after the study is performed to ensure usefulness of the conclusion. J Clin Epidemiol. 2021;129:167–171. doi: 10.1016/j.jclinepi.2020.07.021. PMID: 32979490.

Created by the Great Schools Partnership, the Glossary of Education Reform is a comprehensive online resource that describes widely used school-improvement terms, concepts, and strategies for journalists, parents, and community members.

Evidence-Based

A widely used adjective in education, evidence-based refers to any concept or strategy that is derived from or informed by objective evidence—most commonly, educational research or metrics of school, teacher, and student performance. Among the most common applications are evidence-based decisions , evidence-based school improvement , and evidence-based instruction . The related modifiers data-based, research-based , and scientifically based are also widely used when the evidence in question consists largely or entirely of data, academic research, or scientific findings.

If an educational strategy is evidence-based, data-based, or research-based, educators compile, analyze, and use objective evidence to inform the design of an academic program or guide the modification of instructional techniques. For example, ninth-grade teachers in a high school may systematically review academic data on incoming freshmen to determine which students may need some form of specialized assistance and which students may be at greater risk of dropping out or struggling academically. By looking at absenteeism, disciplinary infractions, and course-failure rates during middle school, teachers can identify students who are more likely to struggle in ninth grade, and they can then proactively prepare academic programs, services, and learning opportunities to reduce the likelihood that those students will fail or drop out. In this case, educators are taking an evidence-based approach to instructing and supporting students in ninth grade. (This specific example is often called an “early warning system.”)
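The early-warning-system procedure described above can be sketched in a few lines of code. This is a hypothetical illustration – the indicator names and thresholds are invented for the example, not drawn from any particular district’s system:

```python
# Sketch of an "early warning system": flag incoming ninth graders whose
# middle-school records cross illustrative risk thresholds for
# absenteeism, disciplinary infractions, or course failures.

def flag_at_risk(student):
    """Return True if any indicator crosses its (invented) threshold."""
    return (student["absences"] > 18            # roughly 10% of the year
            or student["infractions"] >= 3
            or student["courses_failed"] >= 1)

students = [
    {"name": "A", "absences": 4,  "infractions": 0, "courses_failed": 0},
    {"name": "B", "absences": 25, "infractions": 1, "courses_failed": 0},
    {"name": "C", "absences": 10, "infractions": 4, "courses_failed": 2},
]

at_risk = [s["name"] for s in students if flag_at_risk(s)]
# at_risk == ["B", "C"]: these students would be routed to proactive
# academic supports before ninth grade begins.
```

The evidence-based part of the approach lies not in the code but in how the thresholds are chosen: real systems derive them from historical data on which middle-school indicators actually predicted later failure or dropout.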

While research and “quantitative” numerical data are arguably the most common forms of evidence used in education and school reform, educators also use a wide variety of “qualitative” information to diagnose student-learning needs or improve academic programming, including discussions with students and parents, work products created by students and teachers, the results of surveys completed by students and school staff, or observations of teaching —among many other possible forms of evidence. In professional learning communities , for example, groups of teachers may meet regularly to discuss evidence such as research literature, lesson materials, or student-work samples as a way to improve their teaching skills or modify instructional techniques in ways that work better for certain students. Teachers in the group may also observe colleagues while they teach and then provide them with constructive feedback and advice. For a related discussion, see action research .

The use of objective evidence in education reform has grown increasingly common in recent decades, and a wide variety of research and data are now regularly used to identify strengths and weaknesses in schools, guide the design of academic programming, or hold schools and teachers accountable for producing better educational results, for example. From tracking standardized-test scores and graduation rates to using student information systems, sophisticated databases, and other new educational technologies, today’s educators are more likely to use educational data, in one form or another, on a regular basis. In addition, educational research is increasingly being used by reform organizations, charitable foundations, elected officials, policy makers, school leaders, and teachers to inform everything from federal education policies to philanthropic investments to specialized teaching techniques in the classroom.

The growing use of evidence, data, and research in education mirrors a general information-age trend, in a wide variety of fields and professions, toward more objective, fact-based decisions. Historically, educators had to rely largely on personal experience, professional judgment, past practices, established conventions, and other subjective factors to make decisions about how and what to teach—all of which could potentially be inaccurate, misguided, biased, or even detrimental to students. With the advent of modern data systems and research techniques, educators now have access to more objective, precise, and accurate information about student learning, academic achievement, and educational attainment.

Debates about evidence-based approaches to education or school reform depend largely on the evidence and context in question, including how the available evidence is specifically being used or not used. For example, in some situations educators may argue that there is now such an overabundance of data that it has become infeasible, or even impossible, for schools and educators to act thoughtfully and appropriately on available evidence, given that merely collecting, processing, and analyzing so much data or research findings requires far more money, time, human resources, and specialized expertise than schools, districts, or state education agencies have. In other cases, schools and school systems may largely or entirely ignore available evidence; consequently, readily diagnosable school problems may go unaddressed, while effective, well-established teaching practices are never used.

The quality of available evidence, as well as the methods used to interpret research and data, can also contribute to ongoing debates. As in many other fields and professions, education is fraught with conflicting viewpoints, beliefs, and philosophies that can give rise to the misinterpretation or distortion of seemingly concrete and objective evidence. For example, the selection and presentation of data can be manipulated to confirm or disprove existing theories, and cherry-picking certain research findings while ignoring others can be used to generate the perception that certain educational strategies are more successful than they truly are. When researching or reporting on evidence-based approaches to school reform, it is important to investigate the source, quality, reliability, and validity of the evidence in question.
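The effect of cherry-picking can be illustrated with a small, hypothetical simulation (the program, numbers, and study setup below are invented purely for illustration). Even when a program has no true effect at all, reporting only the most favorable of several noisy studies makes it look successful:

```python
import random
import statistics

random.seed(42)  # fixed seed so the draws are reproducible

# Hypothetical program with NO true effect: the average score gain is zero.
# Each "study" observes that zero effect plus sampling noise.
true_effect = 0.0
studies = [random.gauss(true_effect, 2.0) for _ in range(10)]

honest_summary = statistics.mean(studies)  # pools all ten studies
cherry_picked = max(studies)               # reports only the best result

print(f"average across all ten studies: {honest_summary:+.2f}")
print(f"best single study:              {cherry_picked:+.2f}")
```

The pooled average hovers near the true effect of zero, while the single best study will typically look clearly positive; a report that cites only the latter creates exactly the perception described above.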

It is also worth noting that while both quantitative and qualitative evidence are widely used in education, there is debate about how these different types of evidence should be weighed and considered. For example, some educators believe that qualitative evidence is “squishy” and more susceptible to subjectivity, while others may argue that quantitative evidence is too narrow and limited and that it should not be used without taking other forms of evidence into consideration, including the opinions and perspectives of students and teachers.

For a related discussion, see measurement error.


What Evidence-Based Research is and why is it important?

Klara Brunnhuber, Journal of Clinical Epidemiology 129(3), September 2020.


Research Basics

So What Do We Mean By “Formal Research?”

Paul V. Galvin Library

Research is formalized curiosity. It is poking and prying with a purpose. - Zora Neale Hurston

A good working definition of research might be:

Research is the deliberate, purposeful, and systematic gathering of data, information, facts, and/or opinions for the advancement of personal, societal, or overall human knowledge.

Based on this definition, we all do research all the time. Most of this research is casual research. Asking friends what they think of different restaurants, looking up reviews of various products online, learning more about celebrities; these are all research.

Formal research includes the type of research most people think of when they hear the term “research”: scientists in white coats working in a fully equipped laboratory. But formal research is a much broader category than just this. Most people will never do laboratory research after graduating from college, but almost everybody will have to do some sort of formal research at some point in their careers.

Casual research is inward facing: it’s done to satisfy our own curiosity or meet our own needs, whether that’s choosing a reliable car or figuring out what to watch on TV. Formal research is outward facing. While it may satisfy our own curiosity, it’s primarily intended to be shared in order to achieve some purpose. That purpose could be anything: finding a cure for cancer, securing funding for a new business, improving some process at your workplace, proving the latest theory in quantum physics, or even just getting a good grade in your Humanities 200 class.

What sets formal research apart from casual research is the documentation of where you gathered your information from. This is done in the form of “citations” and “bibliographies.” Citing sources is covered in the section "Citing Your Sources."

Formal research also follows certain common patterns depending on what the research is trying to show or prove. These are covered in the section “Types of Research.”

  • Last Updated: Jul 24, 2024 4:33 PM
  • URL: https://guides.library.iit.edu/research_basics


Science-based, Research-based, Evidence-based: What's the difference?

By Hans Dekkers, Dynaread CEO/Founder.

In the field of Special Education, these terms come up all the time in connection with dyslexia treatment programs. What do they stand for? The first table (directly below) defines them; the second demonstrates the breadth of interpretation.

Science-based - Parts or components of the program or method are based on Science.

Research-based - Parts or components of the program or method are based on practices demonstrated effective through Research.

Evidence-based - The entire program or method has been demonstrated through Research to be effective.

We want to point out that Evidence-based is not always fool-proof. There is no regulatory authority that guards the integrity of this term. If research establishes that a treatment group of children outperformed a control group by 1%, the researched method could technically be claimed and marketed as "Evidence-based." And this happens. Further, and arguably more critically, evidence that a method helps five year old children with mild dyslexia by no means translates into valid evidence that the same method will help ten year old children with severe dyslexia. This misuse of 'evidence' is widespread.
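The 1% scenario is easy to make concrete. The figures below are entirely hypothetical, but they show how a tiny difference in pass rates can still clear the bar of statistical significance once the sample is large enough, which may be all that a marketing claim rests on. A minimal sketch using a standard two-proportion z-test:

```python
import math

def two_prop_z(success_a, n_a, success_b, n_b):
    """z statistic for comparing two proportions (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 51% of treated children pass a reading test vs. 50% of controls.
z_10k = two_prop_z(5_100, 10_000, 5_000, 10_000)    # ~1.41: not significant at p < .05
z_40k = two_prop_z(20_400, 40_000, 20_000, 40_000)  # ~2.83: "significant", same 1% gap

print(f"n = 10,000 per group: z = {z_10k:.2f}")
print(f"n = 40,000 per group: z = {z_40k:.2f}")
```

With 40,000 children per group, the same one-point gap crosses the conventional 1.96 threshold, so an "Evidence-based" label can be technically true while the practical effect remains trivial. This is why the size of the effect matters as much as its statistical significance.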

There is more to say...

Example: Scientific evidence has demonstrated that wheels on axles are the preferred means of moving a vehicle. On this premise, compare two vehicles:

  • Science-based and Research-based: a BMW concept car (built on sound science and researched components, but never itself tested as a means of transport).
  • Evidence-based: a horse and wagon (actually researched and demonstrated to get you from A to B).

Both means of transportation move a vehicle from A to B. But the horse and wagon, in this example, is the only one researched as an effective means of getting from A to B, since the BMW concept car is a prototype. In this (admittedly silly) example, if you are strictly driven by the Evidence-based criterion, the horse and wagon would be your vehicle of choice.

What about Dynaread? Dynaread is thoroughly Science- and Research-based, but not (yet) Evidence-based.

Conclusion?

There is safety in the wisdom of the masses, in the forces of the marketplace, and in pursuing Evidence-based approaches. However, the claim Evidence-based is not always watertight, as the Horse and Wagon example illustrates. Further, following this approach too legalistically would limit you to methods which have been established for a number of years—whilst scientific insights grow and mature every month.

There is one tool that I have found highly reliable in many applications, from identifying reputable suppliers to identifying project or business partners: focus on the people behind it. I had the privilege to work with a wonderful man who managed profound responsibilities for third parties. He did not pay a lot of attention to contracts. Instead, he focused on the character of the persons he was dealing with. Contracts with individuals of low integrity will still get you into heavy weather, whereas weak contracts with people of integrity never give you problems.

Do you know who is behind the method you are considering for your child? Or are they hiding behind a contact form and a web site without names and faces? Do you smell passion? Do they care about children? How is their pricing? Is it reasonable, or too heavy to carry for most? Are they making wild claims: Do this, and have your child read in three months? In short: Do you smell the fragrance of integrity or of deception and marketing techniques?

At Dynaread we believe in what we have built: a dyslexia treatment program for older struggling readers. We strive to walk the higher road and to have research evidence behind all we do. We approach you—our prospective client—in full daylight: you have our names, photos, and backgrounds. Dynaread is not snake oil: we have all the reasons and field experience to believe that it is an effective method to help older struggling readers, and our web site explains why. Actually—to return to our examples of means of transportation—we believe we have built a pretty neat car, and we are grateful for it. Talk to us! We love to hear from you.


Contribute with scientific and overall integrity . Retain the focus on the needs of each individual child .

DYNAREAD: Grounded in Reality


Dynaread has been developed in the trenches of actual remediation, with our feet firmly planted on the ground. Scientific research is essential (and we consistently use it), but we also understand the realities at home and in school. Not all homes have two parents, not all Dads or Moms are always home, there is oftentimes no money, and schools lack staff or funding. We listen, we observe, we discuss, and we build the best solutions we can for older (ages 7+) struggling readers.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Hughes RG, editor. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Apr.


Chapter 7. The Evidence for Evidence-Based Practice Implementation

Marita G. Titler

Overview of Evidence-Based Practice

Evidence-based health care practices are available for a number of conditions such as asthma, heart failure, and diabetes. However, these practices are not always implemented in care delivery, and variation in practices abounds. 1–4 Traditionally, patient safety research has focused on data analyses to identify patient safety issues and to demonstrate that a new practice will lead to improved quality and patient safety. 5 Much less research attention has been paid to how to implement practices. Yet, only by putting into practice what is learned from research will care be made safer. 5 Implementing evidence-based safety practices is difficult and requires strategies that address the complexity of systems of care, individual practitioners, senior leadership, and—ultimately—changing health care cultures to be evidence-based safety practice environments. 5

Nursing has a rich history of using research in practice, pioneered by Florence Nightingale. 6–9 Although during the early and mid-1900s, few nurses contributed to this foundation initiated by Nightingale, 10 the nursing profession has more recently provided major leadership for improving care through application of research findings in practice. 11

Evidence-based practice (EBP) is the conscientious and judicious use of current best evidence in conjunction with clinical expertise and patient values to guide health care decisions. 12–15 Best evidence includes empirical evidence from randomized controlled trials; evidence from other scientific methods such as descriptive and qualitative research; as well as use of information from case reports, scientific principles, and expert opinion. When enough research evidence is available, the practice should be guided by research evidence in conjunction with clinical expertise and patient values. In some cases, however, a sufficient research base may not be available, and health care decision making is derived principally from nonresearch evidence sources such as expert opinion and scientific principles. 16 As more research is done in a specific area, the research evidence must be incorporated into the EBP. 15

Models of Evidence-Based Practice

Multiple models of EBP are available and have been used in a variety of clinical settings. 16–36 Although review of these models is beyond the scope of this chapter, common elements of these models are selecting a practice topic (e.g., discharge instructions for individuals with heart failure), critique and syntheses of evidence, implementation, evaluation of the impact on patient care and provider performance, and consideration of the context/setting in which the practice is implemented. 15 , 17 The learning that occurs during the process of translating research into practice is valuable information to capture and feed back into the process, so that others can adapt the evidence-based guideline and/or the implementation strategies.

A recent conceptual framework for maximizing and accelerating the transfer of research results from the Agency for Healthcare Research and Quality (AHRQ) patient safety research portfolio to health care delivery was developed by the dissemination subcommittee of the AHRQ Patient Safety Research Coordinating Committee. 37 This model is a synthesis of concepts from scientific information on knowledge transfer, social marketing, social and organizational innovation, and behavior change (see Figure 1 ). 37 Although the framework is portrayed as a series of stages, the authors of this framework do not believe that the knowledge transfer process is linear; rather, activities occur simultaneously or in different sequences, with implementation of EBPs being a multifaceted process with many actors and systems.

Figure 1. AHRQ Model of Knowledge Transfer. Adapted from Nieva, V., Murphy, R., Ridley, N., et al. Used with permission. http://www.ahrq.gov/qual/advances/

Steps of Evidence-Based Practice

Steps of promoting adoption of EBPs can be viewed from the perspective of those who conduct research or generate knowledge, 23 , 37 those who use the evidence-based information in practice, 16 , 31 and those who serve as boundary spanners to link knowledge generators with knowledge users. 19

Steps of knowledge transfer in the AHRQ model 37 represent three major stages: (1) knowledge creation and distillation, (2) diffusion and dissemination, and (3) organizational adoption and implementation. These stages of knowledge transfer are viewed through the lens of researchers/creators of new knowledge and begin with determining what findings from the patient safety portfolio or individual research projects ought to be disseminated.

Knowledge creation and distillation is conducting research (with expected variation in readiness for use in health care delivery systems) and then packaging relevant research findings into products that can be put into action—such as specific practice recommendations—thereby increasing the likelihood that research evidence will find its way into practice. 37 It is essential that the knowledge distillation process be informed and guided by end users for research findings to be implemented in care delivery. The criteria used in knowledge distillation should include perspectives of the end users (e.g., transportability to the real-world health care setting, feasibility, volume of evidence needed by health care organizations and clinicians), as well as traditional knowledge generation considerations (e.g., strength of the evidence, generalizability).

Diffusion and dissemination involves partnering with professional opinion leaders and health care organizations to disseminate knowledge that can form the basis of action (e.g., essential elements for discharge teaching for hospitalized patients with heart failure) to potential users. Dissemination partnerships link researchers with intermediaries that can function as knowledge brokers and connectors to the practitioners and health care delivery organizations. Intermediaries can be professional organizations such as the National Patient Safety Foundation or multidisciplinary knowledge transfer teams such as those that are effective in disseminating research-based cancer prevention programs. In this model, dissemination partnerships provide an authoritative seal of approval for new knowledge and help identify influential groups and communities that can create a demand for application of the evidence in practice. Both mass communication and targeted dissemination are used to reach audiences with the anticipation that early users will influence later adopters of the new usable, evidence-based research findings. Targeted dissemination efforts must use multifaceted dissemination strategies, with an emphasis on channels and media that are most effective for particular user segments (e.g., nurses, physicians, pharmacists).

End user adoption, implementation, and institutionalization is the final stage of the knowledge transfer process. 37 This stage focuses on getting organizations, teams, and individuals to adopt and consistently use evidence-based research findings and innovations in everyday practice. Implementing and sustaining EBPs in health care settings involves complex interrelationships among the EBP topic (e.g., reduction of medication errors), the organizational social system characteristics (such as operational structures and values, the external health care environment), and the individual clinicians. 35 , 37–39 A variety of strategies for implementation include using a change champion in the organization who can address potential implementation challenges, piloting/trying the change in a particular patient care area of the organization, and using multidisciplinary implementation teams to assist in the practical aspects of embedding innovations into ongoing organizational processes. 35 , 37 Changing practice takes considerable effort at both the individual and organizational level to apply evidence-based information and products in a particular context. 22 When improvements in care are demonstrated in the pilot studies and communicated to other relevant units in the organization, key personnel may then agree to fully adopt and sustain the change in practice. Once the EBP change is incorporated into the structure of the organization, the change is no longer considered an innovation but a standard of care. 22 , 37

In comparison, other models of EBP (e.g., Iowa Model of Evidence-based Practice to Promote Quality of Care 16 ) view the steps of the EBP process from the perspective of clinicians and/or organizational/clinical contexts of care delivery. When viewing steps of the EBP process through the lens of an end user, the process begins with selecting an area for improving care based on evidence (rather than asking what findings ought to be disseminated); determining the priority of the potential topic for the organization; formulating an EBP team composed of key stakeholders; finding, critiquing, and synthesizing the evidence; setting forth EBP recommendations, with the type and strength of evidence used to support each clearly documented; determining if the evidence findings are appropriate for use in practice; writing an EBP standard specific to the organization; piloting the change in practice; implementing changes in practice in other relevant practice areas (depending on the outcome of the pilot); evaluating the EBP changes; and transitioning ongoing quality improvement (QI) monitoring, staff education, and competency review of the EBP topic to appropriate organizational groups as defined by the organizational structure. 15 , 40 The work of EBP implementation from the perspective of the end user is greatly facilitated by efforts of AHRQ, professional nursing organizations (e.g., Oncology Nursing Society), and others that distill and package research findings into useful products and tools for use at the point of care delivery.

When the clinical questions of end users can be addressed through use of existing evidence that is packaged with end users in mind, steps of the EBP process take less time and more effort can be directed toward the implementation, evaluation, and sustainability components of the process. For example, finding, critiquing, and synthesizing the evidence; setting forth EBP recommendations with documentation of the type and strength of evidence for each recommendation; and determining appropriateness of the evidence for use in practice are accelerated when the knowledge-based information is readily available. Some distilled research findings also include quick reference guides that can be used at the point of care and/or integrated into health care information systems, which also helps with implementation. 41 , 42

Translation Science: An Overview

Translation science is the investigation of methods, interventions, and variables that influence adoption by individuals and organizations of EBPs to improve clinical and operational decisionmaking in health care. 35 , 43–46 This includes testing the effect of interventions on promoting and sustaining adoption of EBPs. Examples of translation studies include describing facilitators and barriers to knowledge uptake and use, organizational predictors of adherence to EBP guidelines, attitudes toward EBPs, and defining the structure of the scientific field. 11 , 47–49

Translation science must be guided by a conceptual model that organizes the strategies being tested, elucidates the extraneous variables (e.g., behaviors and facilitators) that may influence adoption of EBPs (e.g., organizational size, characteristics of users), and builds a scientific knowledge base for this field of inquiry. 15 , 50 Conceptual models used in the translating-research-into-practice studies funded by AHRQ were adult learning, health education, social influence, marketing, and organizational and behavior theories. 51 Investigators have used Rogers’s Diffusion of Innovation model, 35 , 39 , 52–55 the Promoting Action on Research Implementation in Health Services (PARIHS) model, 29 the push/pull framework, 23 , 56 , 57 the decisionmaking framework, 58 and the Institute for Healthcare Improvement (IHI) model 59 in translation science.

Study findings regarding evidence-based practices in a diversity of health care settings are building an empirical foundation of translation science. 19 , 43 , 51 , 60–83 These investigations and others 18 , 84–86 provide initial scientific knowledge to guide us in how to best promote use of evidence in practice. To advance knowledge about promoting and sustaining adoption of EBPs in health care, translation science needs more studies that test translating research into practice (TRIP) interventions: studies that investigate what TRIP interventions work, for whom, in what circumstances, in what types of settings; and studies that explain the underlying mechanisms of effective TRIP interventions. 35 , 49 , 79 , 87 Partnership models, which encourage ongoing interaction between researchers and practitioners, may be the way forward to carry out such studies. 56 Challenges, issues, methods, and instruments used in translation research are described elsewhere. 11 , 19 , 49 , 78 , 88–97


What Is Known About Implementing Evidence-Based Practices?

Multifaceted implementation strategies are needed to promote use of research evidence in clinical and administrative health care decisionmaking. 15 , 22 , 37 , 45 , 64 , 72 , 77 , 79 , 98 , 99 Although Grimshaw and colleagues 65 suggest that multifaceted interventions are no more effective than single interventions, context (site of care delivery) was not incorporated in the synthesis methodology. As noted by others, the same TRIP intervention may meet with varying degrees of effectiveness when applied in different contexts. 35 , 49 , 79 , 80 , 87 , 100 , 101 Implementation strategies also need to address both the individual practitioner and organizational perspective. 15 , 22 , 37 , 64 , 72 , 77 , 79 , 98 When practitioners decide individually what evidence to use in practice, considerable variability in practice patterns results, 71 potentially leading to adverse patient outcomes.

For example, an “individual” perspective of EBP would leave the decision about use of evidence-based endotracheal suctioning techniques to each nurse and respiratory therapist. Some individuals may be familiar with the research findings for endotracheal suctioning while others may not. This is likely to result in different and conflicting practices being used as people change shifts every 8 to 12 hours. From an organizational perspective, endotracheal suctioning policies and procedures based on research are written, the evidence-based information is integrated into the clinical information systems, and adoption of these practices by nurses and other practitioners is systematically promoted in the organization. This includes assuring that practitioners have the necessary knowledge, skills, and equipment to carry out the evidence-based endotracheal suctioning practice. The organizational governance supports use of these practices through various councils and committees such as the Practice Committee, Staff Education Committee, and interdisciplinary EBP work groups.

The Translation Research Model, 35 built on Rogers’s seminal work on diffusion of innovations, 39 provides a guiding framework for testing and selecting strategies to promote adoption of EBPs. According to the Translation Research Model, adoption of innovations such as EBPs are influenced by the nature of the innovation (e.g., the type and strength of evidence, the clinical topic) and the manner in which it is communicated (disseminated) to members (nurses) of a social system (organization, nursing profession). 35 Strategies for promoting adoption of EBPs must address these four areas (nature of the EBP topic; users of the evidence; communication; social system) within a context of participative change (see Figure 2 ). This model provided the framework for a multisite study that tested the effectiveness of a multifaceted TRIP intervention designed to promote adoption of evidence-based acute pain management practices for hospitalized older adults. The intervention improved the quality of acute pain management practices and reduced costs. 81 The model is currently being used to test the effectiveness of a multifaceted TRIP intervention to promote evidence-based cancer pain management of older adults in home hospice settings. * This guiding framework is used herein to overview what is known about implementation interventions to promote use of EBPs in health care systems (see Evidence Table ).

Figure 2. Implementation Model. Redrawn from Rogers EM. Diffusion of innovations. 5th ed. New York: The Free Press; 2003; Titler MG, Everett LQ. Translating research into practice: considerations for critical care investigators. Crit Care Nurs Clin North Am 2001;13(4):587-604.

Evidence Table


Evidence-Based Practice in Nursing

Nature of the Innovation or Evidence-Based Practice

Characteristics of an innovation or EBP topic that affect adoption include the relative advantage of the EBP (e.g., effectiveness, relevance to the task, social prestige); the compatibility with values, norms, work, and perceived needs of users; and complexity of the EBP topic. 39 For example, EBP topics that are perceived by users as relatively simple (e.g., influenza vaccines for older adults) are more easily adopted in less time than those that are more complex (acute pain management for hospitalized older adults). Strategies to promote adoption of EBPs related to characteristics of the topic include practitioner review and “reinvention” of the EBP guideline to fit the local context, use of quick reference guides and decision aids, and use of clinical reminders. 53 , 59 , 60 , 65 , 74 , 82 , 102–107 An important principle to remember when planning implementation of an EBP is that the attributes of the EBP topic as perceived by users and stakeholders (e.g., ease of use, valued part of practice) are neither stable features nor sure determinants of their adoption. Rather it is the interaction among the characteristics of the EBP topic, the intended users, and a particular context of practice that determines the rate and extent of adoption. 22 , 35 , 39

Studies suggest that clinical systems, computerized decision support, and prompts that support practice (e.g., decisionmaking algorithms, paper reminders) have a positive effect on aligning practices with the evidence base. 15 , 51 , 65 , 74 , 80 , 82 , 102 , 104 , 107–110 Computerized knowledge management has consistently demonstrated significant improvements in provider performance and patient outcomes. 82 Feldman and colleagues, using a just-in-time e-mail reminder in home health care, have demonstrated (1) improvements in evidence-based care and outcomes for patients with heart failure, 64 , 77 and (2) reduced pain intensity for cancer patients. 75 Clinical information systems should deploy the evidence base to the point of care and incorporate computer decision-support software that integrates evidence for use in clinical decisionmaking about individual patients. 40 , 104 , 111–114 There is still much to learn about the “best” manner of deploying evidence-based information through electronic clinical information systems to support evidence-based care. 115
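
Clinical reminders of the kind described above are, at their core, simple rules evaluated against patient data at the point of care. The sketch below is a hypothetical, minimal illustration in Python of such a rule; the record fields and the 4-hour threshold are assumptions for illustration, not drawn from any specific clinical information system.

```python
from datetime import datetime, timedelta

# Hypothetical reminder rule: flag any patient whose last documented
# pain assessment is more than 4 hours old. Field names and the
# threshold are illustrative assumptions.
ASSESSMENT_INTERVAL = timedelta(hours=4)

def due_for_assessment(patients, now):
    """Return IDs of patients overdue for a pain assessment."""
    return [
        p["id"]
        for p in patients
        if now - p["last_pain_assessment"] > ASSESSMENT_INTERVAL
    ]

now = datetime(2008, 4, 1, 12, 0)
patients = [
    {"id": "A", "last_pain_assessment": datetime(2008, 4, 1, 7, 0)},   # 5 h ago
    {"id": "B", "last_pain_assessment": datetime(2008, 4, 1, 10, 30)}, # 1.5 h ago
]
print(due_for_assessment(patients, now))  # ['A']
```

In a real system the rule would be driven by the clinical information system rather than an in-memory list, but the logic of a decision-support prompt is the same.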

Methods of Communication

Interpersonal communication channels, methods of communication, and influence among social networks of users affect adoption of EBPs. 39 Use of mass media, opinion leaders, change champions, and consultation by experts along with education are among strategies tested to promote use of EBPs. Education is necessary but not sufficient to change practice, and didactic continuing education alone does little to change practice behavior. 61 , 116 There is little evidence that interprofessional education as compared to discipline-specific education improves EBP. 117 Interactive education, used in combination with other practice-reinforcing strategies, has more positive effects on improving EBP than didactic education alone. 66 , 68 , 71 , 74 , 118 , 119 There is evidence that mass media messages (e.g., television, radio, newspapers, leaflets, posters and pamphlets), targeted at the health care consumer population, have some effect on use of health services for the targeted behavior (e.g., colorectal cancer screening). However, little empirical evidence is available to guide framing of messages communicated through planned mass media campaigns to achieve the intended change. 120

Several studies have demonstrated that opinion leaders are effective in changing behaviors of health care practitioners, 22 , 68 , 79 , 100 , 116 , 121–123 especially in combination with educational outreach or performance feedback. Opinion leaders are from the local peer group, viewed as a respected source of influence, considered by associates as technically competent, and trusted to judge the fit between the innovation and the local situation. 39 , 116 , 121 , 124–127 With their wide sphere of influence across several microsystems/units, opinion leaders’ use of the innovation influences peers and alters group norms. 39 , 128 The key characteristic of an opinion leader is that he or she is trusted to evaluate new information in the context of group norms. Opinion leadership is multifaceted and complex, with role functions varying by the circumstances, but few successful projects to implement innovations in organizations have managed without the input of identifiable opinion leaders. 22 , 35 , 39 , 81 , 96 Social interactions such as “hallway chats,” one-on-one discussions, and addressing questions are important, yet often overlooked components of translation. 39 , 59 Thus, having local opinion leaders discuss the EBPs with members of their peer group is necessary to translate research into practice. If the EBP that is being implemented is interdisciplinary in nature, discipline-specific opinion leaders should be used to promote the change in practice. 39

Change champions are also helpful for implementing innovations. 39 , 49 , 81 , 129–131 They are practitioners within the local group setting (e.g., clinic, patient care unit) who are expert clinicians, passionate about the innovation, committed to improving quality of care, and have a positive working relationship with other health care professionals. 39 , 125 , 131 , 132 They circulate information, encourage peers to adopt the innovation, arrange demonstrations, and orient staff to the innovation. 49 , 130 The change champion believes in an idea; will not take “no” for an answer; is undaunted by insults and rebuffs; and, above all, persists. 133 Because nurses prefer interpersonal contact and communication with colleagues rather than Internet or traditional sources of practice knowledge, 134–137 it is imperative that one or two change champions be identified for each patient care unit or clinic where the change is being made for EBPs to be enacted by direct care providers. 81 , 138 Conferencing with opinion leaders and change champions periodically during implementation is helpful to address questions and provide guidance as needed. 35 , 66 , 81 , 106

Because nurses’ preferred information source is through peers and social interactions, 134–137 , 139 , 140 using a core group in conjunction with change champions is also helpful for implementing the practice change. 16 , 110 , 141 A core group is a select group of practitioners with the mutual goal of disseminating information regarding a practice change and facilitating the change by other staff in their unit/microsystem. 142 Core group members represent various shifts and days of the week and become knowledgeable about the scientific basis for the practice; the change champion educates and assists them in using practices that are aligned with the evidence. Each member of the core group, in turn, takes the responsibility for imparting evidence-based information and effecting practice change with two or three of their peers. Members assist the change champion and opinion leader with disseminating the EBP information to other staff, reinforce the practice change on a daily basis, and provide positive feedback to those who align their practice with the evidence base. 15 Using a core-group approach in conjunction with a change champion results in a critical mass of practitioners promoting adoption of the EBP. 39

Educational outreach, also known as academic detailing, promotes positive changes in practice behaviors of nurses and physicians. 22 , 64 , 66 , 71 , 74 , 75 , 77 , 81 , 119 , 143 Academic detailing is done by a topic expert, knowledgeable of the research base (e.g., cancer pain management), who may be external to the practice setting; he or she meets one-on-one with practitioners in their setting to provide information about the EBP topic. These individuals are able to explain the research base for the EBPs to others and are able to respond convincingly to challenges and debates. 22 This strategy may include providing feedback on provider or team performance with respect to selected EBP indicators (e.g., frequency of pain assessment). 66 , 81 , 119

Users of the Innovation or Evidence-Based Practice

Members of a social system (e.g., nurses, physicians, clerical staff) influence how quickly and widely EBPs are adopted. 39 Audit and feedback, performance gap assessment (PGA), and trying the EBP are strategies that have been tested. 15 , 22 , 65 , 66 , 70–72 , 81 , 98 , 124 , 144 PGA and audit and feedback have consistently shown a positive effect on changing practice behavior of providers. 65 , 66 , 70 , 72 , 81 , 98 , 124 , 144 , 145 PGA (baseline practice performance) informs members, at the beginning of change, about a practice performance and opportunities for improvement. Specific practice indicators selected for PGA are related to the practices that are the focus of evidence-based practice change, such as every-4-hour pain assessment for acute pain management. 15 , 66 , 81

Auditing and feedback are ongoing processes of using and assessing performance indicators (e.g., every-4-hour pain assessment), aggregating data into reports, and discussing the findings with practitioners during the practice change. 22 , 49 , 66 , 70 , 72 , 81 , 98 , 145 This strategy helps staff know and see how their efforts to improve care and patient outcomes are progressing throughout the implementation process. Although there is no clear empirical evidence for how to provide audit and feedback, 70 , 146 effects may be larger when clinicians are active participants in implementing change and discuss the data rather than being passive recipients of feedback reports. 67 , 70 Qualitative studies provide some insight into use of audit and feedback. 60 , 67 One study on use of data feedback for improving treatment of acute myocardial infarction found that (1) feedback data must be perceived by physicians as important and valid, (2) the data source and timeliness of data feedback are critical to perceived validity, (3) time is required to establish credibility of data within a hospital, (4) benchmarking improves the validity of the data feedback, and (5) physician leaders can enhance the effectiveness of data feedback. Data feedback that profiles an individual physician’s practices can be effective but may be perceived as punitive; data feedback must persist to sustain improved performance; and effectiveness of data feedback is intertwined with the organizational context, including physician leadership and organizational culture. 60 Hysong and colleagues 67 found that high-performing institutions provided timely, individualized, nonpunitive feedback to providers, whereas low performers were more variable in their timeliness and nonpunitiveness and relied more on standardized, facility-level reports. The concept of useful feedback emerged as the core concept around which timeliness, individualization, nonpunitiveness, and customizability are important.
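
As a rough illustration of the audit step, the following hypothetical Python sketch aggregates indicator data (here, whether an every-4-hour pain assessment was completed on time) into per-unit compliance percentages that could feed a feedback report. The data layout is invented for the example.

```python
from collections import defaultdict

def compliance_by_unit(records):
    """records: (unit, assessed_on_time) pairs -> {unit: percent compliant}."""
    met = defaultdict(int)
    total = defaultdict(int)
    for unit, on_time in records:
        total[unit] += 1
        met[unit] += on_time  # True counts as 1
    return {u: round(100 * met[u] / total[u], 1) for u in total}

# Invented audit records for two hypothetical units
audit = [("4West", True), ("4West", True), ("4West", False),
         ("5East", True), ("5East", False)]
print(compliance_by_unit(audit))  # {'4West': 66.7, '5East': 50.0}
```

Reports like this are only the raw material; as the studies above note, their effect depends on clinicians actively discussing the data rather than passively receiving it.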

Users of an innovation usually try it for a period of time before adopting it in their practice. 22 , 39 , 147 When “trying an EBP” (piloting the change) is incorporated as part of the implementation process, users have an opportunity to use it for a period of time, provide feedback to those in charge of implementation, and modify the practice if necessary. 148 Piloting the EBP as part of implementation has a positive influence on the extent of adoption of the new practice. 22 , 39 , 148

Characteristics of users such as educational preparation, practice specialty, and views on innovativeness may influence adoption of an EBP, although findings are equivocal. 27 , 39 , 130 , 149–153 Nurses’ disposition to critical thinking is, however, positively correlated with research use, 154 and those in clinical educator roles are more likely to use research than staff nurses or nurse managers. 155

Social System

Clearly, the social system or context of care delivery matters when implementing EBPs. 2 , 30 , 33 , 39 , 60 , 84 , 85 , 91 , 92 , 101 , 156–163 For example, investigators demonstrated the effectiveness of a prompted voiding intervention for urinary incontinence in nursing homes, but sustaining the intervention in day-to-day practice was limited when the responsibility of carrying out the intervention was shifted to nursing home staff (rather than the investigative team) and required staffing levels in excess of a majority of nursing home settings. 164 This illustrates the importance of embedding interventions into ongoing processes of care.

Several organizational factors affect adoption of EBPs. 22 , 39 , 79 , 134 , 165–167 Vaughn and colleagues 101 demonstrated that organizational resources, physician full-time equivalents (FTEs) per 1,000 patient visits, organizational size, and whether the facility was located in or near a city affected use of evidence in the health care system of the Department of Veterans Affairs (VA). Large, mature, functionally differentiated organizations (e.g., divided into semiautonomous departments and units) that are specialized, with a focus of professional knowledge, slack resources to channel into new projects, decentralized decisionmaking, and low levels of formalization will more readily adopt innovations such as new practices based on evidence. Larger organizations are generally more innovative because size increases the likelihood that other predictors of innovation adoption—such as slack financial and human resources and differentiation—will be present. However, these organizational determinants account for only about 15 percent of the variation in innovation adoption between comparable organizations. 22 Adler and colleagues 168 hypothesize that while more structurally complex organizations may be more innovative and hence adopt EBPs relatively early, less structurally complex organizations may be able to diffuse EBPs more effectively. Establishing semiautonomous teams is associated with successful implementation of EBPs, and thus should be considered in managing organizational units. 168–170

As part of the work of implementing EBPs, it is important that the social system—unit, service line, or clinic—ensures that policies, procedures, standards, clinical pathways, and documentation systems support the use of the EBPs. 49 , 68 , 72 , 73 , 103 , 140 , 171 Documentation forms or clinical information systems may need revision to support changes in practice; documentation systems that fail to readily support the new practice thwart change. 82

Absorptive capacity for new knowledge is another social system factor that affects adoption of EBPs. Absorptive capacity is the knowledge and skills to enact the EBPs; the strength of evidence alone will not promote adoption. An organization that is able to systematically identify, capture, interpret, share, reframe, and recodify new knowledge, and put it to appropriate use, will be better able to assimilate EBPs. 82 , 103 , 172 , 173 A learning organizational culture and proactive leadership that promotes knowledge sharing are important components of building absorptive capacity for new knowledge. 66 , 139 , 142 , 174 Components of a receptive context for EBP include strong leadership, clear strategic vision, good managerial relations, visionary staff in key positions, a climate conducive to experimentation and risk taking, and effective data capture systems. Leadership is critical in encouraging organizational members to break out of the convergent thinking and routines that are the norm in large, well-established organizations. 4 , 22 , 39 , 122 , 148 , 163 , 175

An organization may be generally amenable to innovations but not ready or willing to assimilate a particular EBP. Elements of system readiness include tension for change, EBP-system fit, assessment of implications, support and advocacy for the EBP, dedicated time and resources, and capacity to evaluate the impact of the EBP during and following implementation. If there is tension around specific work or clinical issues and staff perceive that the situation is intolerable, a potential EBP is likely to be assimilated if it can successfully address the issues, and thereby reduce the tension. 22 , 175

Assessing and structuring workflow to fit with a potential EBP is an important component of fostering adoption. If implications of the EBP are fully assessed, anticipated, and planned for, the practice is more likely to be adopted. 148 , 162 , 176 If supporters for a specific EBP outnumber and are more strategically placed within the organizational power base than opponents, the EBP is more likely to be adopted by the organization. 60 , 175 Organizations that have the capacity to evaluate the impact of the EBP change are more likely to assimilate it. Effective implementation needs both a receptive climate and a good fit between the EBP and intended adopters’ needs and values. 22 , 60 , 140 , 175 , 177

Leadership support is critical for promoting use of EBPs. 33 , 59 , 72 , 85 , 98 , 122 , 178–181 This support, which is expressed verbally, provides necessary resources, materials, and time to fulfill assigned responsibilities. 148 , 171 , 182 , 183 Senior leaders need to create an organizational mission, vision, and strategic plan that incorporate EBP; implement performance expectations for staff that include EBP work; integrate the work of EBP into the governance structure of the health care system; demonstrate the value of EBPs through administrative behaviors; and establish explicit expectations that nurse leaders will create microsystems that value and support clinical inquiry. 122 , 183 , 184

A recent review of organizational interventions to implement EBPs for improving patient care examined five major aspects of patient care. The review suggests that revision of professional roles (changing responsibilities and work of health professionals such as expanding roles of nurses and pharmacists) improved processes of care, although the effect on patient outcomes was less clear. Multidisciplinary teams (collaborative practice teams of physicians, nurses, and allied health professionals) treating mostly patients with prevalent chronic diseases resulted in improved patient outcomes. Integrated care services (e.g., disease management and case management) resulted in improved patient outcomes and cost savings. Interventions aimed at knowledge management (principally via use of technology to support patient care) resulted in improved adherence to EBPs and patient outcomes. The last aspect, quality management, had the fewest reviews available, with the results uncertain. A number of organizational interventions were not included in this review (e.g., leadership, process redesign, organizational learning), and the authors note that the lack of a widely accepted taxonomy of organizational interventions is a problem in examining effectiveness across studies. 82

An organizational intervention that is receiving increasing attention is tailored interventions to overcome barriers to change. 162 , 175 , 185 This type of intervention focuses on first assessing needs in terms of what is causing the gap between current practice and EBP for a specified topic, what behaviors and/or mechanism need to change, what organizational units and persons should be involved, and identification of ways to facilitate the changes. This information is then used in tailoring an intervention for the setting that will promote use of the specified EBP. Based on a recent systematic review, effectiveness of tailored implementation interventions remains uncertain. 185

In summary, making an evidence-based change in practice involves a series of action steps and a complex, nonlinear process. Implementing the change will take several weeks to months, depending on the nature of the practice change. Increasing staff knowledge about a specific EBP and passive dissemination strategies are not likely to work, particularly in complex health care settings. Strategies that seem to have a positive effect on promoting use of EBPs include audit and feedback, use of clinical reminders and practice prompts, opinion leaders, change champions, interactive education, mass media, educational outreach/academic detailing, and characteristics of the context of care delivery (e.g., leadership, learning, questioning). It is important that senior leadership and those leading EBP improvements are aware of change as a process and continue to encourage and teach peers about the change in practice. The new practice must be continually reinforced and sustained or the practice change will be intermittent and soon fade, allowing more traditional methods of care to return. 15

Practice Implications From Translation Science

Principles of Evidence-Based Practice for Patient Safety

Several translation science principles are informative for implementing patient safety initiatives:

  • First, consider the context and engage health care personnel who are at the point of care in selecting and prioritizing patient safety initiatives, clearly communicating the evidence base (strength and type) for the patient safety practice topic(s) and the conditions or setting to which it applies. These communication messages need to be carefully designed and targeted to each stakeholder user group.
  • Second, illustrate, through qualitative or quantitative data (e.g., near misses, sentinel events, adverse events, injuries from adverse events), the reason the organization and individuals within the organization should commit to an evidence-based safety practice topic. Clinicians tend to be more engaged in adopting patient safety initiatives when they understand the evidence base of the practice, in contrast to administrators saying, “We must do this because it is an external regulatory requirement.” For example, it is critical to converse with busy clinicians about the evidence-based rationale for doing fall-risk assessment, and to help them understand that fall-risk assessment is an external regulatory agency expectation because the strength of the evidence supports this patient safety practice.
  • Third, didactic education alone is never enough to change practice; one-time education on a specific safety initiative is not enough. Simply improving knowledge does not necessarily improve practice. Rather, organizations must invest in the tools and skills needed to create a culture of evidence-based patient safety practices where questions are encouraged and systems are created to make it easy to do the right thing.
  • Fourth, the context of EBP improvements in patient safety needs to be addressed at each step of the implementation process; piloting the change in practice is essential to determine the fit between the EBP patient safety information/innovation and the setting of care delivery. There is no one way to implement, and what works in one agency may need modification to fit the organizational culture of another context.
  • Finally, it is important to evaluate the processes and outcomes of implementation. Users and stakeholders need to know that the efforts to improve patient safety have a positive impact on quality of care. For example, if a new barcoding system is being used to administer blood products, it is imperative to know that the steps in the process are being followed (process indicators) and that the change in practice is resulting in fewer blood product transfusion errors (outcome indicators).
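
To make the distinction in the last point concrete, the hypothetical Python sketch below computes a process indicator (how often the barcode step was followed) and an outcome indicator (how often a transfusion error occurred) from the same set of records. The record fields and data are invented for illustration.

```python
def indicator_rates(transfusions):
    """Compute process and outcome indicator rates from transfusion records."""
    n = len(transfusions)
    process = sum(t["barcode_scanned"] for t in transfusions) / n  # steps followed
    outcome = sum(t["error"] for t in transfusions) / n            # adverse results
    return {"barcode_scan_rate": process, "error_rate": outcome}

# Invented records for a hypothetical barcoding rollout
sample = [
    {"barcode_scanned": True,  "error": False},
    {"barcode_scanned": True,  "error": False},
    {"barcode_scanned": False, "error": True},
    {"barcode_scanned": True,  "error": False},
]
print(indicator_rates(sample))  # {'barcode_scan_rate': 0.75, 'error_rate': 0.25}
```

Tracking the two rates separately shows whether the process is actually being followed and whether following it is producing the intended outcome.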

Research Implications

Translation science is young, and although there is a growing body of knowledge in this area, we have, to date, many unanswered questions. These include the type of audit and feedback (e.g., frequency, content, format) strategies that are most effective, the characteristics of opinion leaders that are critical for success, the role of specific context variables, and the combination of strategies that are most effective. We also know very little about use of tailored implementation interventions, or the key context attributes to assess and use in developing and testing tailored interventions. The types of clinical reminders that are most effective for making EBP knowledge available at the point of care require further empirical explanation. We also know very little about the intensity and intervention dose of single and multifaceted strategies that are effective for promoting and sustaining use of EBPs or how the effectiveness differs by type of topic (e.g., simple versus complex). Only recently has the context of care delivery been acknowledged as affecting use of evidence, and further empirical work is needed in this area to understand how complex adaptive systems of practice incorporate knowledge acquisition and use. Lastly, we do not know what strategies or combination of strategies work for whom, in what context, why they work in some settings or cases and not others, and what is the mechanism by which these strategies or combination of strategies work.

This is an exciting area of investigation that has a direct impact on implementing patient safety practices. In planning investigations, researchers must use a conceptual model to guide the research and add to the empirical and theoretical understanding of this field of inquiry. Additionally, funding is needed for implementation studies that focus on evidence-based patient safety practices as the topic of concern. To generalize empirical findings from patient safety implementation studies, we must have a better understanding of what implementation strategies work, with whom, and in what types of settings, and we must investigate the underlying mechanisms of these strategies. This is likely to require mixed methods, a better understanding of complexity science, and greater appreciation for nontraditional methods and realistic inquiry. 87

Although the science of translating research into practice is fairly new, there is some guiding evidence of what implementation interventions to use in promoting patient safety practices. However, there is no magic bullet for translating what is known from research into practice. To move evidence-based interventions into practice, several strategies may be needed. Additionally, what works in one context of care may or may not work in another setting, thereby suggesting that context variables matter in implementation. 80

Search Strategy

Several electronic databases (MEDLINE®, CINAHL®, PubMed®) were searched using the terms evidence-based practice research, implementation research, and patient safety. (The terms “quality improvement” and “quality improvement intervention research” were not used.) The Cochrane Collaboration’s Cochrane Reviews were also searched for systematic reviews of specific implementation strategies, and the journal Implementation Science was reviewed. I also requested the final reports of the TRIP I and TRIP II studies funded by AHRQ. Classic articles known to the author (e.g., Locock et al. 123) were also included in this chapter.

*Principal Investigator: Keela Herr (R01 grant no. CA115363-01; National Cancer Institute (NCI))

  • Cite this Page Titler MG. The Evidence for Evidence-Based Practice Implementation. In: Hughes RG, editor. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Apr. Chapter 7.

What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes. Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Other interesting articles
  • Frequently asked questions about research design

Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach: collects and analyzes data in the form of words to explore ideas and experiences in depth.
Quantitative approach: collects and analyzes numerical data to measure variables and describe frequencies, averages, and correlations about relationships between variables.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and quasi-experimental designs allow you to test cause-and-effect relationships.
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
  • Experimental: manipulates an independent variable and uses random assignment to test its effect on a dependent variable.
  • Quasi-experimental: tests cause-and-effect relationships, but without full random assignment (e.g., using pre-existing groups).
  • Correlational: measures variables as they naturally occur and examines the associations between them.
  • Descriptive: describes the characteristics, trends, or frequencies of variables without testing relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

  • Grounded theory: develops a theory inductively by systematically collecting and analyzing data.
  • Phenomenology: investigates a phenomenon through the lived experiences of those who have encountered it.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalize your results to the population as a whole.

  • Probability sampling: every member of the population has a chance of being selected, using random selection methods (e.g., simple random or stratified sampling).
  • Non-probability sampling: individuals are selected based on non-random criteria (e.g., convenience or voluntary response sampling).

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
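The contrast between the two approaches can be sketched in a few lines of Python. The population of 500 student IDs is purely illustrative:

```python
import random

# Hypothetical population: 500 student IDs (illustrative data only).
population = [f"student_{i}" for i in range(500)]

# Probability sampling: simple random sampling gives every member an
# equal, known chance of selection, which supports generalization.
random.seed(42)  # fixed seed so the illustration is reproducible
probability_sample = random.sample(population, k=50)

# Non-probability sampling: a convenience sample takes whoever is
# easiest to reach (here, simply the first 50 IDs), which risks bias.
convenience_sample = population[:50]

print(len(probability_sample), len(convenience_sample))  # 50 50
```

Note that even a random sample can be unrepresentative by chance, which is one reason sample size and response rates matter.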

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

  • Questionnaires: lists of written questions that respondents answer themselves, on paper or online; efficient for collecting data from large samples.
  • Interviews: questions asked and answered orally, whether structured, semi-structured, or unstructured; allow follow-up questions and richer responses.

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

  • Qualitative observation: recording detailed, open-ended field notes about behaviors and interactions in context.
  • Quantitative observation: systematically counting or measuring predefined events or behaviors.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.


Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity has already been established.
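As a minimal sketch of operationalization, suppose (hypothetically) that “social anxiety” is operationalized as the mean of five self-rating items on a 1–5 Likert scale. The function name and item values below are invented for illustration:

```python
# Hypothetical operationalization: "social anxiety" as the mean of
# five self-rating items on a 1-5 Likert scale (values are invented).
def anxiety_score(item_responses):
    """Average one participant's Likert responses into a single indicator."""
    if not all(1 <= r <= 5 for r in item_responses):
        raise ValueError("each response must be on the 1-5 scale")
    return sum(item_responses) / len(item_responses)

participant = [4, 5, 3, 4, 4]
print(anxiety_score(participant))  # 4.0
```

A real instrument’s scoring rules would come from its validation literature rather than an ad hoc average like this.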

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

  • Reliability: the consistency of a measure (e.g., the same test administered twice yields similar results).
  • Validity: the accuracy of a measure (e.g., a test actually captures the concept it is intended to measure).

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
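For instance, test-retest reliability in a pilot study can be checked with a correlation between two administrations of the same instrument. The sketch below implements Pearson’s r directly with the standard library; the pilot scores are invented:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical pilot data: six participants measured at two time points.
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 11, 19, 13, 17]
print(round(pearson_r(time1, time2), 2))  # 0.95 -> high test-retest reliability
```

A correlation this high would suggest the instrument produces stable scores over time, though a real pilot would also examine validity.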

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).

Step 6: Decide on your data analysis strategies

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
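The three summaries above can be computed with Python’s standard library; the test scores are invented for illustration:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical test scores for a sample of 10 students.
scores = [72, 85, 85, 90, 64, 78, 85, 70, 92, 79]

print(Counter(scores))          # distribution: frequency of each score
print(mean(scores))             # central tendency: mean score of 80
print(round(stdev(scores), 1))  # variability: sample standard deviation, 9.1
```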

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
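As one concrete example of a comparison test, Welch’s t statistic for two independent groups can be computed from the group means and sample variances. This is an illustrative sketch with invented outcome scores, not a complete test with p-values:

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic comparing two independent group means."""
    na, nb = len(group_a), len(group_b)
    standard_error = (variance(group_a) / na + variance(group_b) / nb) ** 0.5
    return (mean(group_a) - mean(group_b)) / standard_error

# Hypothetical outcome scores for a treatment and a control group.
treatment = [21, 24, 23, 26, 22]
control = [18, 20, 19, 22, 17]
print(round(welch_t(treatment, control), 2))  # 3.29
```

In practice the statistic is compared against a t distribution (with Welch-adjusted degrees of freedom) to obtain a p-value, usually via a statistics library rather than by hand.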

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

  • Thematic analysis: codes the data and groups the codes into themes to interpret patterns of meaning.
  • Discourse analysis: examines how language is used in context, focusing on communication and social meaning.

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions about research design

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question. Research projects can take many forms, such as qualitative or quantitative, descriptive, longitudinal, experimental, or correlational. What kind of research approach you choose will depend on your topic.

Cite this Scribbr article


McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved August 27, 2024, from https://www.scribbr.com/methodology/research-design/


What Is Research, and Why Do People Do It?

  • Open Access
  • First Online: 03 December 2022


  • James Hiebert
  • Jinfa Cai
  • Stephen Hwang
  • Anne K. Morris
  • Charles Hohensee

Part of the book series: Research in Mathematics Education ((RME))


Abstract

Every day people do research as they gather information to learn about something of interest. In the scientific world, however, research means something different than simply gathering information. Scientific research is characterized by its careful planning and observing, by its relentless efforts to understand and explain, and by its commitment to learn from everyone else seriously engaged in research. We call this kind of research scientific inquiry and define it as “formulating, testing, and revising hypotheses.” By “hypotheses” we do not mean the hypotheses you encounter in statistics courses. We mean predictions about what you expect to find and rationales for why you made these predictions. Throughout this and the remaining chapters we make clear that the process of scientific inquiry applies to all kinds of research studies and data, both qualitative and quantitative.


Part I. What Is Research?

Have you ever studied something carefully because you wanted to know more about it? Maybe you wanted to know more about your grandmother’s life when she was younger so you asked her to tell you stories from her childhood, or maybe you wanted to know more about a fertilizer you were about to use in your garden so you read the ingredients on the package and looked them up online. According to the dictionary definition, you were doing research.

Recall your high school assignments asking you to “research” a topic. The assignment likely included consulting a variety of sources that discussed the topic, perhaps including some “original” sources. Often, the teacher referred to your product as a “research paper.”

Were you conducting research when you interviewed your grandmother or wrote high school papers reviewing a particular topic? Our view is that you were engaged in part of the research process, but only a small part. In this book, we reserve the word “research” for what it means in the scientific world, that is, for scientific research or, more pointedly, for scientific inquiry .

Exercise 1.1

Before you read any further, write a definition of what you think scientific inquiry is. Keep it short: two to three sentences. You will periodically update this definition as you read this chapter and the remainder of the book.

This book is about scientific inquiry—what it is and how to do it. For starters, scientific inquiry is a process, a particular way of finding out about something that involves a number of phases. Each phase of the process constitutes one aspect of scientific inquiry. You are doing scientific inquiry as you engage in each phase, but you have not done scientific inquiry until you complete the full process. Each phase is necessary but not sufficient.

In this chapter, we set the stage by defining scientific inquiry—describing what it is and what it is not—and by discussing what it is good for and why people do it. The remaining chapters build directly on the ideas presented in this chapter.

A first thing to know is that scientific inquiry is not all or nothing. “Scientificness” is a continuum. Inquiries can be more scientific or less scientific. What makes an inquiry more scientific? You might be surprised there is no universally agreed upon answer to this question. None of the descriptors we know of are sufficient by themselves to define scientific inquiry. But all of them give you a way of thinking about some aspects of the process of scientific inquiry. Each one gives you different insights.


Exercise 1.2

As you read about each descriptor below, think about what would make an inquiry more or less scientific. If you think a descriptor is important, use it to revise your definition of scientific inquiry.

Creating an Image of Scientific Inquiry

We will present three descriptors of scientific inquiry. Each provides a different perspective and emphasizes a different aspect of scientific inquiry. We will draw on all three descriptors to compose our definition of scientific inquiry.

Descriptor 1. Experience Carefully Planned in Advance

Sir Ronald Fisher, often called the father of modern statistical design, once referred to research as “experience carefully planned in advance” (1935, p. 8). He said that humans are always learning from experience, from interacting with the world around them. Usually, this learning is haphazard rather than the result of a deliberate process carried out over an extended period of time. Research, Fisher said, was learning from experience, but experience carefully planned in advance.

This phrase can be fully appreciated by looking at each word. The fact that scientific inquiry is based on experience means that it is based on interacting with the world. These interactions could be thought of as the stuff of scientific inquiry. In addition, it is not just any experience that counts. The experience must be carefully planned . The interactions with the world must be conducted with an explicit, describable purpose, and steps must be taken to make the intended learning as likely as possible. This planning is an integral part of scientific inquiry; it is not just a preparation phase. It is one of the things that distinguishes scientific inquiry from many everyday learning experiences. Finally, these steps must be taken beforehand and the purpose of the inquiry must be articulated in advance of the experience. Clearly, scientific inquiry does not happen by accident, by just stumbling into something. Stumbling into something unexpected and interesting can happen while engaged in scientific inquiry, but learning does not depend on it and serendipity does not make the inquiry scientific.

Descriptor 2. Observing Something and Trying to Explain Why It Is the Way It Is

When we were writing this chapter and googled “scientific inquiry,” the first entry was: “Scientific inquiry refers to the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work.” The emphasis is on studying, or observing, and then explaining . This descriptor takes the image of scientific inquiry beyond carefully planned experience and includes explaining what was experienced.

According to the Merriam-Webster dictionary, “explain” means “(a) to make known, (b) to make plain or understandable, (c) to give the reason or cause of, and (d) to show the logical development or relations of” (Merriam-Webster, n.d.). We will use all these definitions. Taken together, they suggest that to explain an observation means to understand it by finding reasons (or causes) for why it is as it is. In this sense of scientific inquiry, the following are synonyms: explaining why, understanding why, and reasoning about causes and effects. Our image of scientific inquiry now includes planning, observing, and explaining why.


We need to add a final note about this descriptor. We have phrased it in a way that suggests “observing something” means you are observing something in real time—observing the way things are or the way things are changing. This is often true. But, observing could mean observing data that already have been collected, maybe by someone else making the original observations (e.g., secondary analysis of NAEP data or analysis of existing video recordings of classroom instruction). We will address secondary analyses more fully in Chap. 4 . For now, what is important is that the process requires explaining why the data look like they do.

We must note that for us, the term “data” is not limited to numerical or quantitative data such as test scores. Data can also take many nonquantitative forms, including written survey responses, interview transcripts, journal entries, video recordings of students, teachers, and classrooms, text messages, and so forth.


Exercise 1.3

What are the implications of the statement that just “observing” is not enough to count as scientific inquiry? Does this mean that a detailed description of a phenomenon is not scientific inquiry?

Find sources that define research in education that differ with our position, that say description alone, without explanation, counts as scientific research. Identify the precise points where the opinions differ. What are the best arguments for each of the positions? Which do you prefer? Why?

Descriptor 3. Updating Everyone’s Thinking in Response to More and Better Information

This descriptor focuses on a third aspect of scientific inquiry: updating and advancing the field’s understanding of phenomena that are investigated. This descriptor foregrounds a powerful characteristic of scientific inquiry: the reliability (or trustworthiness) of what is learned and the ultimate inevitability of this learning to advance human understanding of phenomena. Humans might choose not to learn from scientific inquiry, but history suggests that scientific inquiry always has the potential to advance understanding and that, eventually, humans take advantage of these new understandings.

Before exploring these bold claims a bit further, note that this descriptor uses “information” in the same way the previous two descriptors used “experience” and “observations.” These are the stuff of scientific inquiry and we will use them often, sometimes interchangeably. Frequently, we will use the term “data” to stand for all these terms.

An overriding goal of scientific inquiry is for everyone to learn from what one scientist does. Much of this book is about the methods you need to use so others have faith in what you report and can learn the same things you learned. This aspect of scientific inquiry has many implications.

One implication is that scientific inquiry is not a private practice. It is a public practice available for others to see and learn from. Notice how different this is from everyday learning. When you happen to learn something from your everyday experience, often only you gain from the experience. The fact that research is a public practice means it is also a social one. It is best conducted by interacting with others along the way: soliciting feedback at each phase, taking opportunities to present work-in-progress, and benefitting from the advice of others.

A second implication is that you, as the researcher, must be committed to sharing what you are doing and what you are learning in an open and transparent way. This allows all phases of your work to be scrutinized and critiqued. This is what gives your work credibility. The reliability or trustworthiness of your findings depends on your colleagues recognizing that you have used all appropriate methods to maximize the chances that your claims are justified by the data.

A third implication of viewing scientific inquiry as a collective enterprise is the reverse of the second—you must be committed to receiving comments from others. You must treat your colleagues as fair and honest critics even though it might sometimes feel otherwise. You must appreciate their job, which is to remain skeptical while scrutinizing what you have done in considerable detail. To provide the best help to you, they must remain skeptical about your conclusions (when, for example, the data are difficult for them to interpret) until you offer a convincing logical argument based on the information you share. A rather harsh but good-to-remember statement of the role of your friendly critics was voiced by Karl Popper, a well-known twentieth century philosopher of science: “. . . if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can” (Popper, 1968, p. 27).

A final implication of this third descriptor is that, as someone engaged in scientific inquiry, you have no choice but to update your thinking when the data support a different conclusion. This applies to your own data as well as to those of others. When data clearly point to a specific claim, even one that is quite different than you expected, you must reconsider your position. If the outcome is replicated multiple times, you need to adjust your thinking accordingly. Scientific inquiry does not let you pick and choose which data to believe; it mandates that everyone update their thinking when the data warrant an update.

Doing Scientific Inquiry

We define scientific inquiry in an operational sense—what does it mean to do scientific inquiry? What kind of process would satisfy all three descriptors: carefully planning an experience in advance; observing and trying to explain what you see; and, contributing to updating everyone’s thinking about an important phenomenon?

We define scientific inquiry as formulating , testing , and revising hypotheses about phenomena of interest.

Of course, we are not the only ones who define it in this way. The definition for the scientific method posted by the editors of Britannica is: “a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments” (Britannica, n.d.).


Notice how defining scientific inquiry this way satisfies each of the descriptors. “Carefully planning an experience in advance” is exactly what happens when formulating a hypothesis about a phenomenon of interest and thinking about how to test it. “ Observing a phenomenon” occurs when testing a hypothesis, and “ explaining ” what is found is required when revising a hypothesis based on the data. Finally, “updating everyone’s thinking” comes from comparing publicly the original with the revised hypothesis.

Doing scientific inquiry, as we have defined it, underscores the value of accumulating knowledge rather than generating random bits of knowledge. Formulating, testing, and revising hypotheses is an ongoing process, with each revised hypothesis begging for another test, whether by the same researcher or by new researchers. The editors of Britannica signaled this cyclic process by adding the following phrase to their definition of the scientific method: “The modified hypothesis is then retested, further modified, and tested again.” Scientific inquiry creates a process that encourages each study to build on the studies that have gone before. Through collective engagement in this process of building study on top of study, the scientific community works together to update its thinking.

Before exploring more fully the meaning of “formulating, testing, and revising hypotheses,” we need to acknowledge that this is not the only way researchers define research. Some researchers prefer a less formal definition, one that includes more serendipity, less planning, less explanation. You might have come across more open definitions such as “research is finding out about something.” We prefer the tighter hypothesis formulation, testing, and revision definition because we believe it provides a single, coherent map for conducting research that addresses many of the thorny problems educational researchers encounter. We believe it is the most useful orientation toward research and the most helpful to learn as a beginning researcher.

A final clarification of our definition is that it applies equally to qualitative and quantitative research. This is a familiar distinction in education that has generated much discussion. You might think our definition favors quantitative methods over qualitative methods because the language of hypothesis formulation and testing is often associated with quantitative methods. In fact, we do not favor one method over another. In Chap. 4 , we will illustrate how our definition fits research using a range of quantitative and qualitative methods.

Exercise 1.4

Look for ways to extend what the field knows in an area that has already received attention by other researchers. Specifically, you can search for a program of research carried out by more experienced researchers that has some revised hypotheses that remain untested. Identify a revised hypothesis that you might like to test.

Unpacking the Terms Formulating, Testing, and Revising Hypotheses

To get a full sense of the definition of scientific inquiry we will use throughout this book, it is helpful to spend a little time with each of the key terms.

We first want to make clear that we use the term “hypothesis” as it is defined in most dictionaries and as it is used in many scientific fields rather than as it is usually defined in educational statistics courses. By “hypothesis,” we do not mean a null hypothesis that is accepted or rejected by statistical analysis. Rather, we use “hypothesis” in the sense conveyed by the following definitions: “An idea or explanation for something that is based on known facts but has not yet been proved” (Cambridge University Press, n.d.), and “An unproved theory, proposition, or supposition, tentatively accepted to explain certain facts and to provide a basis for further investigation or argument” (Agnes & Guralnik, 2008).

We distinguish two parts to “hypotheses.” Hypotheses consist of predictions and rationales . Predictions are statements about what you expect to find when you inquire about something. Rationales are explanations for why you made the predictions you did, why you believe your predictions are correct. So, for us “formulating hypotheses” means making explicit predictions and developing rationales for the predictions.

“Testing hypotheses” means making observations that allow you to assess in what ways your predictions were correct and in what ways they were incorrect. In education research, it is rarely useful to think of your predictions as either right or wrong. Because of the complexity of most issues you will investigate, most predictions will be right in some ways and wrong in others.

By studying the observations you make (data you collect) to test your hypotheses, you can revise your hypotheses to better align with the observations. This means revising your predictions plus revising your rationales to justify your adjusted predictions. Even though you might not run another test, formulating revised hypotheses is an essential part of conducting a research study. Comparing your original and revised hypotheses informs everyone of what you learned by conducting your study. In addition, a revised hypothesis sets the stage for you or someone else to extend your study and accumulate more knowledge of the phenomenon.

We should note that not everyone makes a clear distinction between predictions and rationales as two aspects of hypotheses. In fact, common, non-scientific uses of the word “hypothesis” may limit it to only a prediction or only an explanation (or rationale). We choose to explicitly include both prediction and rationale in our definition of hypothesis, not because we assert this should be the universal definition, but because we want to foreground the importance of both parts acting in concert. Using “hypothesis” to represent both prediction and rationale could hide the two aspects, but we make them explicit because they provide different kinds of information. It is usually easier to make predictions than develop rationales because predictions can be guesses, hunches, or gut feelings about which you have little confidence. Developing a compelling rationale requires careful thought plus reading what other researchers have found plus talking with your colleagues. Often, while you are developing your rationale you will find good reasons to change your predictions. Developing good rationales is the engine that drives scientific inquiry. Rationales are essentially descriptions of how much you know about the phenomenon you are studying. Throughout this guide, we will elaborate on how developing good rationales drives scientific inquiry. For now, we simply note that it can sharpen your predictions and help you to interpret your data as you test your hypotheses.

An image illustrates prediction and rationale as the two parts of a hypothesis and the different kinds of information each provides.

Hypotheses in education research take a variety of forms or types. This is because there are a variety of phenomena that can be investigated. Investigating educational phenomena is sometimes best done using qualitative methods, sometimes using quantitative methods, and most often using mixed methods (e.g., Hay, 2016; Weis et al., 2019a; Weisner, 2005). This means that, given our definition, hypotheses are equally applicable to qualitative and quantitative investigations.

Hypotheses take different forms when they are used to investigate different kinds of phenomena. Two very different activities in education research could be labeled conducting experiments and conducting descriptive studies. In an experiment, a hypothesis makes a prediction about anticipated changes, say the changes that occur when a treatment or intervention is applied. You might, for example, investigate how students’ thinking changes during a particular kind of instruction.

A second type of hypothesis, relevant for descriptive research, makes a prediction about what you will find when you investigate and describe the nature of a situation. The goal is to understand a situation as it exists rather than to understand a change from one situation to another. In this case, your prediction is what you expect to observe. Your rationale is the set of reasons for making this prediction; it is your current explanation for why the situation will look like it does.

You will probably read, if you have not already, that some researchers say you do not need a prediction to conduct a descriptive study. We will discuss this point of view in Chap. 2. For now, we simply claim that scientific inquiry, as we have defined it, applies to all kinds of research studies. Descriptive studies, like others, not only benefit from formulating, testing, and revising hypotheses; they require it.

One reason we define research as formulating, testing, and revising hypotheses is that if you think of research in this way you are less likely to go wrong. It is a useful guide for the entire process, as we will describe in detail in the chapters ahead. For example, as you build the rationale for your predictions, you are constructing the theoretical framework for your study (Chap. 3 ). As you work out the methods you will use to test your hypothesis, every decision you make will be based on asking, “Will this help me formulate or test or revise my hypothesis?” (Chap. 4 ). As you interpret the results of testing your predictions, you will compare them to what you predicted and examine the differences, focusing on how you must revise your hypotheses (Chap. 5 ). By anchoring the process to formulating, testing, and revising hypotheses, you will make smart decisions that yield a coherent and well-designed study.

Exercise 1.5

Compare the concept of formulating, testing, and revising hypotheses with the descriptions of scientific inquiry contained in Scientific Research in Education (NRC, 2002 ). How are they similar or different?

Exercise 1.6

Provide an example to illustrate and emphasize the differences between everyday learning/thinking and scientific inquiry.

Learning from Doing Scientific Inquiry

We noted earlier that a measure of what you have learned by conducting a research study is found in the differences between your original hypothesis and your revised hypothesis based on the data you collected to test your hypothesis. We will elaborate this statement in later chapters, but we preview our argument here.

Even before collecting data, scientific inquiry requires cycles of making a prediction, developing a rationale, refining your predictions, reading and studying more to strengthen your rationale, refining your predictions again, and so forth. And, even if you have run through several such cycles, you still will likely find that when you test your prediction you will be partly right and partly wrong. The results will support some parts of your predictions but not others, or the results will “kind of” support your predictions. A critical part of scientific inquiry is making sense of your results by interpreting them against your predictions. Carefully describing what aspects of your data supported your predictions, what aspects did not, and what data fell outside of any predictions is not an easy task, but you cannot learn from your study without doing this analysis.

An image illustrates the cycle of making a prediction, developing a rationale, and refining both through repeated study before data are collected.

Analyzing the matches and mismatches between your predictions and your data allows you to formulate different rationales that would have accounted for more of the data. The best revised rationale is the one that accounts for the most data. Once you have revised your rationales, you can think about the predictions they best justify or explain. It is by comparing your original rationales to your new rationales that you can sort out what you learned from your study.

Suppose your study was an experiment. Maybe you were investigating the effects of a new instructional intervention on students’ learning. Your original rationale was your explanation for why the intervention would change the learning outcomes in a particular way. Your revised rationale explained why the changes that you observed occurred like they did and why your revised predictions are better. Maybe your original rationale focused on the potential of the activities if they were implemented in ideal ways and your revised rationale included the factors that are likely to affect how teachers implement them. By comparing the before and after rationales, you are describing what you learned—what you can explain now that you could not before. Another way of saying this is that you are describing how much more you understand now than before you conducted your study.

Revised predictions based on carefully planned and collected data usually exhibit some of the following features compared with the originals: more precision, more completeness, and broader scope. Revised rationales have more explanatory power and become more complete, more aligned with the new predictions, sharper, and overall more convincing.

Part II. Why Do Educators Do Research?

Doing scientific inquiry is a lot of work. Each phase of the process takes time, and you will often cycle back to improve earlier phases as you engage in later phases. Because of the significant effort required, you should make sure your study is worth it. So, from the beginning, you should think about the purpose of your study. Why do you want to do it? And, because research is a social practice, you should also think about whether the results of your study are likely to be important and significant to the education community.

If you are doing research in the way we have described—as scientific inquiry—then one purpose of your study is to understand , not just to describe or evaluate or report. As we noted earlier, when you formulate hypotheses, you are developing rationales that explain why things might be like they are. In our view, trying to understand and explain is what separates research from other kinds of activities, like evaluating or describing.

One reason understanding is so important is that it allows researchers to see how or why something works like it does. When you see how something works, you are better able to predict how it might work in other contexts, under other conditions. And, because conditions, or contextual factors, matter a lot in education, gaining insights into applying your findings to other contexts increases the contributions of your work and its importance to the broader education community.

Consequently, the purposes of research studies in education often include the more specific aim of identifying and understanding the conditions under which the phenomena being studied work like the observations suggest. A classic example of this kind of study in mathematics education was reported by William Brownell and Harold Moser in 1949 . They were trying to establish which method of subtracting whole numbers could be taught most effectively—the regrouping method or the equal additions method. However, they realized that effectiveness might depend on the conditions under which the methods were taught—“meaningfully” versus “mechanically.” So, they designed a study that crossed the two instructional approaches with the two different methods (regrouping and equal additions). Among other results, they found that these conditions did matter. The regrouping method was more effective under the meaningful condition than the mechanical condition, but the same was not true for the equal additions algorithm.

What do education researchers want to understand? In our view, the ultimate goal of education is to offer all students the best possible learning opportunities. So, we believe the ultimate purpose of scientific inquiry in education is to develop understanding that supports the improvement of learning opportunities for all students. We say “ultimate” because there are lots of issues that must be understood to improve learning opportunities for all students. Hypotheses about many aspects of education are connected, ultimately, to students’ learning. For example, formulating and testing a hypothesis that preservice teachers need to engage in particular kinds of activities in their coursework in order to teach particular topics well is, ultimately, connected to improving students’ learning opportunities. So is hypothesizing that school districts often devote relatively few resources to instructional leadership training or hypothesizing that positioning mathematics as a tool students can use to combat social injustice can help students see the relevance of mathematics to their lives.

We do not exclude the importance of research on educational issues more removed from improving students’ learning opportunities, but we do think the argument for their importance will be more difficult to make. If there is no way to imagine a connection between your hypothesis and improving learning opportunities for students, even a distant connection, we recommend you reconsider whether it is an important hypothesis within the education community.

Notice that we said the ultimate goal of education is to offer all students the best possible learning opportunities. For too long, educators have been satisfied with a goal of offering rich learning opportunities for lots of students, sometimes even for just the majority of students, but not necessarily for all students. Evaluations of success often are based on outcomes that show high averages. In other words, if many students have learned something, or even a smaller number have learned a lot, educators may have been satisfied. The problem is that there is usually a pattern in the groups of students who receive lower quality opportunities—students of color and students who live in poor areas, urban and rural. This is not acceptable. Consequently, we emphasize the premise that the purpose of education research is to offer rich learning opportunities to all students.

One way to make sure you will be able to convince others of the importance of your study is to consider investigating some aspect of teachers’ shared instructional problems. Historically, researchers in education have set their own research agendas, regardless of the problems teachers are facing in schools. It is increasingly recognized that teachers have had trouble applying to their own classrooms what researchers find. To address this problem, a researcher could partner with a teacher—better yet, a small group of teachers—and talk with them about instructional problems they all share. These discussions can create a rich pool of problems researchers can consider. If researchers pursued one of these problems (preferably alongside teachers), the connection to improving learning opportunities for all students could be direct and immediate. “Grounding a research question in instructional problems that are experienced across multiple teachers’ classrooms helps to ensure that the answer to the question will be of sufficient scope to be relevant and significant beyond the local context” (Cai et al., 2019b , p. 115).

As a beginning researcher, determining the relevance and importance of a research problem is especially challenging. We recommend talking with advisors, other experienced researchers, and peers to test the educational importance of possible research problems and topics of study. You will also learn much more about the issue of research importance when you read Chap. 5 .

Exercise 1.7

Identify a problem in education that is closely connected to improving learning opportunities and a problem that has a less close connection. For each problem, write a brief argument (like a logical sequence of if-then statements) that connects the problem to all students’ learning opportunities.

Part III. Conducting Research as a Practice of Failing Productively

Scientific inquiry involves formulating hypotheses about phenomena that are not fully understood—by you or anyone else. Even if you are able to inform your hypotheses with lots of knowledge that has already been accumulated, you are likely to find that your prediction is not entirely accurate. This is normal. Remember, scientific inquiry is a process of constantly updating your thinking. More and better information means revising your thinking, again, and again, and again. Because you never fully understand a complicated phenomenon and your hypotheses never produce completely accurate predictions, it is easy to believe you are somehow failing.

The trick is to fail upward, to fail to predict accurately in ways that inform your next hypothesis so you can make a better prediction. Some of the best-known researchers in education have been open and honest about the many times their predictions were wrong and, based on the results of their studies and those of others, they continuously updated their thinking and changed their hypotheses.

A striking example of publicly revising (actually reversing) hypotheses due to incorrect predictions is found in the work of Lee J. Cronbach, one of the most distinguished educational psychologists of the twentieth century. In 1955, Cronbach delivered his presidential address to the American Psychological Association. Titled “The Two Disciplines of Scientific Psychology,” the address proposed a rapprochement between two research approaches—correlational studies that focused on individual differences and experimental studies that focused on instructional treatments controlling for individual differences. (We will examine different research approaches in Chap. 4.) If these approaches could be brought together, reasoned Cronbach (1957), researchers could find interactions between individual characteristics and treatments (aptitude-treatment interactions or ATIs), fitting the best treatments to different individuals.

In 1975, after years of research by many researchers looking for ATIs, Cronbach acknowledged the evidence for simple, useful ATIs had not been found. Even when trying to find interactions between a few variables that could provide instructional guidance, the analysis, said Cronbach, creates “a hall of mirrors that extends to infinity, tormenting even the boldest investigators and defeating even ambitious designs” (Cronbach, 1975 , p. 119).

As he was reflecting back on his work, Cronbach ( 1986 ) recommended moving away from documenting instructional effects through statistical inference (an approach he had championed for much of his career) and toward approaches that probe the reasons for these effects, approaches that provide a “full account of events in a time, place, and context” (Cronbach, 1986 , p. 104). This is a remarkable change in hypotheses, a change based on data and made fully transparent. Cronbach understood the value of failing productively.

Closer to home, in a less dramatic example, one of us began a line of scientific inquiry into how to prepare elementary preservice teachers to teach early algebra. Teaching early algebra meant engaging elementary students in early forms of algebraic reasoning. Such reasoning should help them transition from arithmetic to algebra. To begin this line of inquiry, a set of activities for preservice teachers were developed. Even though the activities were based on well-supported hypotheses, they largely failed to engage preservice teachers as predicted because of unanticipated challenges the preservice teachers faced. To capitalize on this failure, follow-up studies were conducted, first to better understand elementary preservice teachers’ challenges with preparing to teach early algebra, and then to better support preservice teachers in navigating these challenges. In this example, the initial failure was a necessary step in the researchers’ scientific inquiry and furthered the researchers’ understanding of this issue.

We present another example of failing productively in Chap. 2 . That example emerges from recounting the history of a well-known research program in mathematics education.

Making mistakes is an inherent part of doing scientific research. Conducting a study is rarely a smooth path from beginning to end. We recommend that you keep the following things in mind as you begin a career of conducting research in education.

First, do not get discouraged when you make mistakes; do not fall into the trap of feeling like you are not capable of doing research because you make too many errors.

Second, learn from your mistakes. Do not ignore your mistakes or treat them as errors that you simply need to forget and move past. Mistakes are rich sites for learning—in research just as in other fields of study.

Third, by reflecting on your mistakes, you can learn to make better mistakes, mistakes that inform you about a productive next step. You will not be able to eliminate your mistakes, but you can set a goal of making better and better mistakes.

Exercise 1.8

How does scientific inquiry differ from everyday learning in giving you the tools to fail upward? You may find helpful perspectives on this question in other resources on science and scientific inquiry (e.g., Failure: Why Science is So Successful by Firestein, 2015).

Exercise 1.9

Use what you have learned in this chapter to write a new definition of scientific inquiry. Compare this definition with the one you wrote before reading this chapter. If you are reading this book as part of a course, compare your definition with your colleagues’ definitions. Develop a consensus definition with everyone in the course.

Part IV. Preview of Chap. 2

Now that you have a good idea of what research is, at least of what we believe research is, the next step is to think about how to actually begin doing research. This means how to begin formulating, testing, and revising hypotheses. As for all phases of scientific inquiry, there are lots of things to think about. Because it is critical to start well, we devote Chap. 2 to getting started with formulating hypotheses.

Agnes, M., & Guralnik, D. B. (Eds.). (2008). Hypothesis. In Webster’s new world college dictionary (4th ed.). Wiley.


Britannica. (n.d.). Scientific method. In Encyclopaedia Britannica . Retrieved July 15, 2022 from https://www.britannica.com/science/scientific-method

Brownell, W. A., & Moser, H. E. (1949). Meaningful vs. mechanical learning: A study in grade III subtraction. Duke University Press.

Cai, J., Morris, A., Hohensee, C., Hwang, S., Robison, V., Cirillo, M., Kramer, S. L., & Hiebert, J. (2019b). Posing significant research questions. Journal for Research in Mathematics Education, 50 (2), 114–120. https://doi.org/10.5951/jresematheduc.50.2.0114


Cambridge University Press. (n.d.). Hypothesis. In Cambridge dictionary . Retrieved July 15, 2022 from https://dictionary.cambridge.org/us/dictionary/english/hypothesis

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30 , 116–127.

Cronbach, L. J. (1986). Social inquiry by and for earthlings. In D. W. Fiske & R. A. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 83–107). University of Chicago Press.

Hay, C. M. (Ed.). (2016). Methods that matter: Integrating mixed methods for more effective social science research . University of Chicago Press.

Merriam-Webster. (n.d.). Explain. In Merriam-Webster.com dictionary . Retrieved July 15, 2022, from https://www.merriam-webster.com/dictionary/explain

National Research Council. (2002). Scientific research in education . National Academy Press.

Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019a). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121 , 100307.

Weisner, T. S. (Ed.). (2005). Discovering successful pathways in children’s development: Mixed methods in the study of childhood and family life . University of Chicago Press.


Author information

Authors and Affiliations

School of Education, University of Delaware, Newark, DE, USA

James Hiebert, Anne K Morris & Charles Hohensee

Department of Mathematical Sciences, University of Delaware, Newark, DE, USA

Jinfa Cai & Stephen Hwang


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Hiebert, J., Cai, J., Hwang, S., Morris, A.K., Hohensee, C. (2023). What Is Research, and Why Do People Do It?. In: Doing Research: A New Researcher’s Guide. Research in Mathematics Education. Springer, Cham. https://doi.org/10.1007/978-3-031-19078-0_1

Published: 03 December 2022

Print ISBN: 978-3-031-19077-3

Online ISBN: 978-3-031-19078-0



Research-based learning

In the context of this listing, research-based learning refers to involving learners directly in authentic research projects. This has an impact on a variety of levels. The learner potentially gains significant motivation as a result of their participation in real-life research. This has been seen to be the catalyst for learners to delve deeply into topics.

In addition, through their involvement in actual research, learners become knowledgeable about the nature of research and their role as researchers. Research-based learning techniques are introduced to learners in as many contexts as possible, in order to develop their skills of interpretation, analysis and application.

More detail:

IRIS outlines its vision for involving learners directly in research projects:

The Institute for Research in Schools

Our vision: a transformation of the student and teacher experience of science. Being involved in real science inspires young people and is the best professional development for teachers.

Thanks to ever more powerful technology, today's school students can access top level scientific data, collaborate with scientists around the globe, process information at lightning speed and develop innovative experimental ideas. They can put an experiment in space and contribute to scientific discovery. IRIS helps students and their teachers do this.

From our work to date, we find that when sixth-form students take part in research, greater numbers go on to study science at university and take up careers in science and engineering.

An excellent example of a research-based learning approach is the UK Institute for Research in Schools (IRIS: http://www.researchinschools.org ):

IRIS makes cutting edge research projects open to school students and their teachers so that they can experience the excitement and challenge of science. We do this by making data accessible to schools, providing teacher training and resources, and by lending out scientific research equipment.

Another example comes from Warwick University in the UK:

https://warwick.ac.uk/services/ldc/resource/rbl/whatis/

In Research-based learning, research is regarded as a theme which underpins teaching at a range of levels. As well as incorporating outcomes of research into curricula, it includes developing students' awareness of processes and methods of enquiry, and creating an inclusive culture of research involving staff and students.

CPOM helps school students become Earth Observation researchers

A new project launched by IRIS is offering students the chance to contribute to scientific understanding of the polar regions. Funded by the UK Space Agency, MELT will allow schools to monitor changes at the poles using Earth Observation data.

Experts… will be helping students to understand the latest satellite Earth Observation data and investigate events such as iceberg calving, where recent dramatic changes suggest that environmental conditions have changed.

Dr Hogg said: “There are really exciting opportunities for students to work with Earth Observation scientists on major changes. We used Sentinel-1 satellite data to watch a giant iceberg four times the size of London break free from Antarctica’s Larsen-C ice shelf in 2017, and now students can use the same data to measure whether new icebergs calve off some of the fastest flowing glaciers in the world!”

Web resources:

Professor Becky Parker Introduces IRIS

IRIS helps increase girls taking engineering degrees by 200%

Teaching Research Method Using a Student-Centred Approach? Critical Reflections on Practice 

Application:

The intention is to use research-based learning in a variety of ways, and the vision extends beyond science. Because authentic contexts matter so much for learning, self-directed learners will be encouraged, wherever possible, to take up opportunities for involvement in industry-based research, whatever the relevant area of knowledge.

Less self-directed learners are encouraged to gather data through research into issues or challenges. The intention is to develop both a research mindset and research skills.

Research-based learning is one of 25 learning methodologies in the Learnlife learning paradigm toolkit. Learn more about the different ways to engage learners through these methodologies.

Published: 27 August 2024

Learning motif-based graphs for drug–drug interaction prediction via local–global self-attention

Yi Zhong, Gaozheng Li, Ji Yang, Houbing Zheng, Yongqiang Yu, Jiheng Zhang, Heng Luo, Biao Wang & Zuquan Weng

Nature Machine Intelligence (2024)

Subjects: Computational models, Drug safety

Unexpected drug–drug interactions (DDIs) are important issues for both pharmaceutical research and clinical applications due to the high risk of causing severe adverse drug reactions or drug withdrawals. Many deep learning models have achieved high performance in DDI prediction, but model interpretability to reveal the underlying causes of DDIs has not been extensively explored. Here we propose MeTDDI—a deep learning framework with local–global self-attention and co-attention to learn motif-based graphs for DDI prediction. MeTDDI achieved competitive performance compared with state-of-the-art models. Regarding interpretability, we conducted extensive assessments on 73 drugs with 13,786 DDIs and MeTDDI can precisely explain the structural mechanisms for 5,602 DDIs involving 58 drugs. Besides, MeTDDI shows potential to explain complex DDI mechanisms and mitigate DDI risks. To summarize, MeTDDI provides a new perspective on exploring DDI mechanisms, which will benefit both drug discovery and polypharmacy for safer therapies for patients.
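The abstract above hinges on scaled dot-product self-attention applied to motif-based molecular graphs. As a rough, generic illustration of that building block — a sketch only; the function name, shapes and single-head setup are assumptions, not the authors' MeTDDI implementation, which combines local (within-motif) and global attention with co-attention:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over node embeddings.

    x: (n_nodes, d) motif/node features; w_q, w_k, w_v: (d, d_k) projections.
    Returns (n_nodes, d_k) attention-mixed features.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise attention logits
    scores -= scores.max(axis=-1, keepdims=True)  # softmax numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # each row sums to 1
    return attn @ v                               # weighted mix of value vectors
```

In a motif-based model, `x` would hold embeddings of chemical substructures (e.g. fragments from a BRICS-style decomposition) rather than individual atoms; restricting which pairs may attend to one another yields a "local" variant, while attending across all motifs yields the "global" one.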


Data availability

The datasets used for the model construction and supporting the findings of this study are available in the Supplementary Information or via Code Ocean at https://codeocean.com/capsule/1423077/tree/v3 (ref. 68 ). DrugBank 32 version 5.1.8 is available at https://go.drugbank.com/releases/5-1-8 . The ChEMBL29 50 dataset is available at https://chembl.gitbook.io/chembl-interface-documentation/downloads . The source of the AUC FC values for the DDI pairs are available via GitHub at https://github.com/harryscpt/pk-ddip . Source data are provided with this paper.

Code availability

The source code for this study is freely available via both Code Ocean (at https://codeocean.com/capsule/1423077/tree/v3 (ref. 68 )) and GitHub (at https://github.com/LabWeng/MeTDDI ).

Dagli, R. J. & Sharma, A. Polypharmacy: a global risk factor for elderly people. J. Int. Oral Health 6 , i–ii (2014).

Aggarwal, P., Woolford, S. J. & Patel, H. P. Multi-morbidity and polypharmacy in older people: challenges and opportunities for clinical practice. Geriatrics 5 , 85 (2020).

Jiang, H. et al. Adverse drug reactions and correlations with drug-drug interactions: a retrospective study of reports from 2011 to 2020. Front. Pharmacol. 13 , 923939 (2022).

Hao, X. et al. Enhancing drug-drug interaction prediction by three-way decision and knowledge graph embedding. Granul. Comput. 8 , 67–76 (2023).

Yang, Z., Zhong, W., Lv, Q. & Yu-Chian Chen, C. Learning size-adaptive molecular substructures for explainable drug-drug interaction prediction by substructure-aware graph neural network. Chem. Sci. 13 , 8693–8703 (2022).

Zhang, X. et al. Molormer: a lightweight self-attention-based method focused on spatial structure of molecular graph for drug-drug interactions prediction. Brief. Bioinform. 23 , bbac296 (2022).

Ryu, J. Y., Kim, H. U. & Lee, S. Y. Deep learning improves prediction of drug-drug and drug-food interactions. Proc. Natl Acad. Sci. USA 115 , e4304–e4311 (2018).

Zhong, Y. et al. Emerging machine learning techniques in predicting adverse drug reactions. In Machine Learning and Deep Learning in Computational Toxicology 53–82 (Springer, 2023).

Zitnik, M., Agrawal, M. & Leskovec, J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 34 , i457–i466 (2018).

Karim, M. R. et al. Drug-drug interaction prediction based on knowledge graph embeddings and convolutional-LSTM network. In Proc. 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics 113–123 (ACM, 2019).

Huang, K., Xiao, C., Hoang, T., Glass, L. & Sun, J. CASTER: predicting drug interactions with chemical substructure representation. In Proc. AAAI Conference on Artificial Intelligence 702–709 (2020).

Deng, Y. et al. META-DDIE: predicting drug-drug interaction events with few-shot learning. Brief. Bioinform. 23 , bbab514 (2022).

Xu, N., Wang, P., Chen, L., Tao, J. & Zhao, J. MR-GNN: multi-resolution and dual graph neural network for predicting structured entity interactions. In Proc. 28th International Joint Conference on Artificial Intelligence 3968–3974 (AAAI Press, 2019).

Li, Z. et al. DSN-DDI: an accurate and generalized framework for drug-drug interaction prediction by dual-view representation learning. Brief. Bioinform. 24 , bbac597 (2023).

Guo, Z. et al. Graph-based molecular representation learning. In Proc. Thirty-Second International Joint Conference on Artificial Intelligence 6638–6646 (2023).

Xiong, Z. et al. Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism. J. Med. Chem. 63 , 8749–8760 (2020).

Zhang, X. C. et al. MG-BERT: leveraging unsupervised atomic representation learning for molecular property prediction. Brief. Bioinform. 22 , bbab152 (2021).

Yu, Z. & Gao, H. Molecular representation learning via heterogeneous motif graph neural networks. In International Conference on Machine Learning 25581–25594 (PMLR, 2022).

Zhang, Z., Liu, Q., Wang, H., Lu, C. & Lee, C.-K. Motif-based graph self-supervised learning for molecular property prediction. In Proc. 35th International Conference on Neural Information Processing Systems 15870–15882 (Curran Associates, 2021).

Bucher, H. C., Achermann, R., Stohler, N. & Meier, C. R. Surveillance of physicians causing potential drug-drug interactions in ambulatory care: a pilot study in Switzerland. PLoS ONE 11 , e0147606 (2016).

Smithburger, P. L., Buckley, M. S., Bejian, S., Burenheide, K. & Kane-Gill, S. L. A critical evaluation of clinical decision support for the detection of drug-drug interactions. Expert Opin. Drug Saf. 10 , 871–882 (2011).

Tornio, A., Filppula, A. M., Niemi, M. & Backman, J. T. Clinical studies on drug-drug interactions involving metabolism and transport: methodology, pitfalls, and interpretation. Clin. Pharmacol. Ther. 105 , 1345–1361 (2019).

Kaushik, S., Prasun, C. & Sharma, D. Translational and disease bioinformatics. In Encyclopedia of Bioinformatics and Computational Biology 1046–1057 (Elsevier, 2019).

Jang, H. Y. et al. Machine learning-based quantitative prediction of drug exposure in drug-drug interactions using drug label information. npj Digit. Med. 5 , 88 (2022).

Hakkola, J., Hukkanen, J., Turpeinen, M. & Pelkonen, O. Inhibition and induction of CYP enzymes in humans: an update. Arch. Toxicol. 94 , 3671–3722 (2020).

Deodhar, M. et al. Mechanisms of CYP450 inhibition: understanding drug-drug interactions due to mechanism-based inhibition in clinical practice. Pharmaceutics 12 , 846 (2020).

Liu, N., Chen, C. B. & Kumara, S. Semi-supervised learning algorithm for identifying high-priority drug-drug interactions through adverse event reports. IEEE J. Biomed. Health Inform. 24 , 57–68 (2020).

Vo, T. H., Nguyen, N. T. K., Kha, Q. H. & Le, N. Q. K. On the road to explainable AI in drug-drug interactions prediction: a systematic review. Comput. Struct. Biotechnol. J. 20 , 2112–2123 (2022).

Wang, Y. et al. Identification of vital chemical information via visualization of graph neural networks. Brief. Bioinform. 24 , bbac577 (2023).

Orr, S. T. et al. Mechanism-based inactivation (MBI) of cytochrome P450 enzymes: structure-activity relationships and discovery strategies to mitigate drug-drug interaction risks. J. Med. Chem. 55 , 4896–4933 (2012).

Georgiev, K. D., Hvarchanova, N., Stoychev, E. & Kanazirev, B. Prevalence of polypharmacy and risk of potential drug-drug interactions among hospitalized patients with emphasis on the pharmacokinetics. Sci. Prog. 105 , 368504211070183 (2022).

Wishart, D. S. et al. DrugBank 5.0: a major update to the DrugBank database for 2018. Nucleic Acids Res. 46 , D1074–D1082 (2018).

Preissner, S. et al. SuperCYP: a comprehensive database on cytochrome P450 enzymes including a tool for analysis of CYP-drug interactions. Nucleic Acids Res. 38 , D237–D243 (2010).

Xiong, G. et al. DDInter: an online drug-drug interaction database towards improving clinical decision-making and patient safety. Nucleic Acids Res. 50 , D1200–D1207 (2022).

Center for Drug Evaluation and Research. New Drug Therapy Approvals 2023 (US FDA, 2023).

Kamel, A. & Harriman, S. Inhibition of cytochrome P450 enzymes and biochemical aspects of mechanism-based inactivation (MBI). Drug Discov. Today Technol. 10 , e177–e189 (2013).

Loos, N. H. C., Beijnen, J. H. & Schinkel, A. H. The mechanism-based inactivation of CYP3A4 by ritonavir: what mechanism? Int. J. Mol. Sci. 23 , 9866 (2022).

Rock, B. M., Hengel, S. M., Rock, D. A., Wienkers, L. C. & Kunze, K. L. Characterization of ritonavir-mediated inactivation of cytochrome P450 3A4. Mol. Pharmacol. 86 , 665–674 (2014).

Wang, Z. et al. Impact of paroxetine, a strong CYP2D6 inhibitor, on SPN-812 (viloxazine extended-release) pharmacokinetics in healthy adults. Clin. Pharmacol. Drug Dev. 10 , 1365–1374 (2021).

Harbeson, S. L. & Tung, R. D. Deuterium in drug discovery and development. Annu. Rep. Med. Chem. 46 , 403–417 (2011).

Li, Y. et al. Novel tetrazole-containing analogues of itraconazole as potent antiangiogenic agents with reduced cytochrome P450 3A4 inhibition. J. Med. Chem. 61 , 11158–11168 (2018).

Shou, M. et al. A kinetic model for the metabolic interaction of two substrates at the active site of cytochrome P450 3A4. J. Biol. Chem. 276 , 2256–2262 (2001).

Midde, N. M. et al. Effect of ethanol on the metabolic characteristics of HIV-1 integrase inhibitor elvitegravir and elvitegravir/cobicistat with CYP3A: an analysis using a newly developed LC-MS/MS method. PLoS ONE 11 , e0149225 (2016).

Palovaara, S. et al. Effect of an oral contraceptive preparation containing ethinylestradiol and gestodene on CYP3A4 activity as measured by midazolam 1'-hydroxylation. Br. J. Clin. Pharmacol. 50 , 333–337 (2000).

Guengerich, F. P., Waterman, M. R. & Egli, M. Recent structural insights into cytochrome P450 function. Trends Pharmacol. Sci. 37 , 625–640 (2016).

Bachmann, P. et al. Prevalence and severity of potential drug-drug interactions in patients with multiple sclerosis with and without polypharmacy. Pharmaceutics 14 , 592 (2022).

Van De Sijpe, G. et al. Overall performance of a drug-drug interaction clinical decision support system: quantitative evaluation and end-user survey. BMC Med. Inform. Decis. Mak. 22 , 48 (2022).

Louis, S. Y. et al. Graph convolutional neural networks with global attention for improved materials property prediction. Phys. Chem. Chem. Phys. 22 , 18141–18148 (2020).

Degen, J., Wegscheid-Gerlach, C., Zaliani, A. & Rarey, M. On the art of compiling and using ‘drug-like’ chemical fragment spaces. ChemMedChem 3 , 1503–1507 (2008).

Gaulton, A. et al. ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 40 , D1100–D1107 (2012).

Han, S. et al. HimGNN: a novel hierarchical molecular graph representation learning framework for property prediction. Brief. Bioinform. 24 , bbad305 (2023).

Vaswani, A. et al. Attention is all you need. Adv. Neural Inf. Process. Syst. 30 (2017).

Wu, Z., Liu, Z., Lin, J., Lin, Y. & Han, S. Lite transformer with long-short range attention. In International Conference on Learning Representations (2020).

Dwivedi, V. P. & Bresson, X. A generalization of transformer networks to graphs. In AAAI Workshop on Deep Learning on Graphs: Methods and Applications (DLG-AAAI, 2021).

Wu, C., Wu, F. & Huang, Y. DA-Transformer: distance-aware transformer. In Proc. 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 2059–2068 (NAACL 2021).

Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 4171–4186 (NAACL 2019).

Zhong, Y. et al. DDI-GCN: drug-drug interaction prediction via explainable graph convolutional networks. Artif. Intell. Med. 144 , 102640 (2023).

Jiang, D. et al. InteractionGraphNet: a novel and efficient deep graph representation learning framework for accurate protein-ligand interaction predictions. J. Med. Chem. 64 , 18209–18232 (2021).

Abadi, M. TensorFlow: learning functions at scale. In Proc. 21st ACM SIGPLAN International Conference on Functional Programming 1 (ACM, 2016).

Landrum, G. RDKit: a software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum 8 , 5281 (2013).

Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12 , 2825–2830 (2011).

Harris, C. R. et al. Array programming with NumPy. Nature 585 , 357–362 (2020).

McKinney, W. pandas: a foundational Python library for data analysis and statistics. Python High Perf. Sci. Comput. 14 , 1–9 (2011).

Kwon, S. & Yoon, S. DeepCCI: end-to-end deep learning for chemical-chemical interaction prediction. In Proc. 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics 203–212 (ACM, 2017).

Huang, K., Xiao, C., Glass, L. M. & Sun, J. MolTrans: molecular interaction transformer for drug–target interaction prediction. Bioinformatics 37 , 830–836 (2021).

Pathak, Y., Laghuvarapu, S., Mehta, S. & Priyakumar, U. D. Chemically Interpretable Graph Interaction Network for prediction of pharmacokinetic properties of drug-like molecules. In Proc. AAAI Conference on Artificial Intelligence 873–880 (2020).

Lee, N. et al. Conditional Graph Information Bottleneck for molecular relational learning. In International Conference on Machine Learning 18852–18871 (PMLR, 2023).

Zhong, Y., Li, G., Yang, J., Zheng, H., Yu, Y., Zhang, J., Luo, H., Wang, B. & Weng, Z. Learning motif-based graph for drug-drug interaction prediction via local-global self-attention. Code Ocean https://doi.org/10.24433/CO.0704680.v1 (2024).

Center for Drug Evaluation and Research. Clinical Drug Interaction Studies—Cytochrome P450 Enzyme- and Transporter-Mediated Drug Interactions Guidance for Industry (US FDA, 2020).

Acknowledgements

This work was supported by the National Natural Science Foundation of China (no. 81971837); Leading Project Foundation of Science and Technology, Fujian Province (2022Y0015); Joint Funds for the Innovation of Science and Technology, Fujian Province (2021Y9155); The Start-up Funds of Scientific Research Projects for Mutually Employed Experts; and The First Affiliated Hospital of Fujian Medical University (YJRCHP-2023WZQ). All the funding is awarded to Z.W.

Author information

These authors contributed equally: Yi Zhong, Gaozheng Li.

Authors and Affiliations

College of Computer and Data Science, Fuzhou University, Fuzhou, China

Yi Zhong, Gaozheng Li, Ji Yang, Yongqiang Yu, Jiheng Zhang & Zuquan Weng

Department of Plastic Surgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China

Houbing Zheng, Biao Wang & Zuquan Weng

MetaNovas Biotech Inc., Foster City, CA, USA

College of Biological Science and Engineering, Fuzhou University, Fuzhou, China

Zuquan Weng

Contributions

Y.Z., Z.W., B.W. and H.L. conceived and designed the work. Y.Z. and G.L. developed and implemented the model and analysed the data. J.Y., H.Z., Y.Y. and J.Z. mainly contributed to the data collection. All authors contributed to writing the paper.

Corresponding authors

Correspondence to Heng Luo , Biao Wang or Zuquan Weng .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Jimeng Sun and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–3, Table 5 and details of the molecular docking implementation.

Reporting Summary

Supplementary Data 1

Supplementary Tables 1–4 and 6–12.

Source data

Source Data Fig. 2

Statistical source data.

Source Data Fig. 3

Source Data Fig. 4

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article.

Zhong, Y., Li, G., Yang, J. et al. Learning motif-based graphs for drug–drug interaction prediction via local–global self-attention. Nat Mach Intell (2024). https://doi.org/10.1038/s42256-024-00888-6

Download citation

Received : 13 September 2023

Accepted : 24 July 2024

Published : 27 August 2024

DOI : https://doi.org/10.1038/s42256-024-00888-6


Research: Consumers Spend Loyalty Points and Cash Differently

  • So Yeon Chun,
  • Freddy Lim,
  • Ville Satopää

Your loyalty strategy needs to consider four ways people value points.

Do consumers treat loyalty points the same way that they treat traditional money? And how do they choose to spend one versus the other? The authors of this article present research findings from their analysis of data describing over 29,000 unique loyalty-points earning and spending transactions made during two recent years by 500 airline loyalty program consumers. They found that points users fell into four distinct categories: 1) Money advocates, who prefer cash over points, even when their value is identical in terms of purchasing power; 2) Currency impartialists, who regard points and cash interchangeably, valuing them equally based on their financial worth; 3) Point gamers, who actively seek out the most advantageous point-redemption opportunities, opting to spend points particularly when their value significantly surpasses that of cash; and 4) Point lovers, who value points more than money even if their purchasing power is the same or lower. This article explores the strategic implications of these findings for companies that manage loyalty programs.

In the years since The Economist  spotlighted the astonishing scale of loyalty points — particularly frequent-flyer miles — as a potential global currency rivaling traditional money in 2005, usage has grown rapidly in size and scope. For example, the number of flight redemptions at Southwest Airlines doubled from 5.4 million in 2013 (representing 9.5% of revenue passenger miles) to 10.9 million in 2023 (representing 16.3% of revenue passenger miles).

  • So Yeon Chun is an Associate Professor of Technology & Operations Management at INSEAD, a global business school with campuses in Abu Dhabi, France, and Singapore.
  • Freddy Lim is an Assistant Professor of Information Systems and Analytics at the National University of Singapore School of Computing.
  • Ville Satopää is an Associate Professor of Technology and Operations Management at INSEAD, a global business school with campuses in Abu Dhabi, France, and Singapore.


Figure legend abbreviations: CKD, chronic kidney disease; CVD, cardiovascular disease; DPP4i, dipeptidyl peptidase-4 inhibitor; HF, heart failure; HHF, history of heart failure; PS, propensity score; RMST, restricted mean survival time; SGLT2i, sodium-glucose cotransporter-2 inhibitor.

eDescription 1. Sensitivity and Subgroup Analyses

eTable 1. Operational Definitions of Baseline Characteristics Defined by ICD-9-CM and ICD-10-CM Disease Diagnosis Codes

eTable 2. Operational Definitions of Study Outcomes Defined by ICD-9-CM and ICD-10-CM Disease Diagnosis Codes and Data Sources for Measurement

eTable 3. Baseline Characteristics of Study Cohorts Before Propensity Score Matching

eTable 4. Results of Cox Proportional Hazard Model Analyses on Study Outcomes Among Cohorts Using Either High-Dimensional Propensity-Score or Propensity-Score Weighting Procedures (Sensitivity Analyses)

eTable 5. Event Rate and Hazard Ratio Associated With SGLT2i Versus DPP4i Use in Hospitalization for Heart Failure Outcome (Subgroup Analyses)

eTable 6. Event Rate and Hazard Ratio Associated With SGLT2i Versus DPP4i Use in Chronic Kidney Disease Outcome (Subgroup Analyses)

eFigure 1. Flowchart of Study Cohort Selection

eFigure 2. Kernel Density Plots of Propensity Score for Study Cohorts (A) Before and (B) After Propensity Score Matching for Aim 1 and Those for Study Cohort (C) Before and (D) After Propensity Score Matching for Aim 2

eFigure 3. Kaplan-Meier Survival Curves of SGLT2i and DPP4i Users for (A) Hospitalization for Heart Failure, (B) 3P-MACE, (C) 4P-MACE, (D) Myocardial Infarction, (E) Stroke, (F) Cardiovascular Death, (G) All-Cause Death, (H) Chronic Kidney Disease, and (I) Dental Visits for Tooth Care

eDescription 2. Comparison and Interpretations of Restricted Mean Survival Time Analysis and Hazards Ratio Estimates

Data Sharing Statement

  • Errors in End Matter and Supplement 1 JAMA Network Open Correction January 24, 2023


Peng Z , Yang C , Kuo S , Wu C , Lin W , Ou H. Restricted Mean Survival Time Analysis to Estimate SGLT2i–Associated Heterogeneous Treatment Effects on Primary and Secondary Prevention of Cardiorenal Outcomes in Patients With Type 2 Diabetes in Taiwan. JAMA Netw Open. 2022;5(12):e2246928. doi:10.1001/jamanetworkopen.2022.46928

Restricted Mean Survival Time Analysis to Estimate SGLT2i–Associated Heterogeneous Treatment Effects on Primary and Secondary Prevention of Cardiorenal Outcomes in Patients With Type 2 Diabetes in Taiwan

  • 1 Institute of Clinical Pharmacy and Pharmaceutical Sciences, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • 2 Division of Metabolism, Endocrinology & Diabetes, Department of Internal Medicine, University of Michigan Medical School, Ann Arbor
  • 3 Department of Family Medicine, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • 4 Department of Family Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • 5 Institute of Gerontology, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • 6 Division of Nephrology, Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • 7 Institute of Clinical Medicine, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • 8 Department of Pharmacy, College of Medicine, National Cheng Kung University, Tainan, Taiwan
  • Correction Errors in End Matter and Supplement 1 JAMA Network Open

Question   Is restricted mean survival time (RMST) analysis feasible to translate comparative cardiovascular and kidney outcomes of sodium-glucose cotransporter-2 inhibitor (SGLT2i) vs dipeptidyl peptidase-4 inhibitor (DPP4i) therapy in routine clinical settings?

Findings   In this comparative effectiveness study of 21 144 propensity score–matched pairs of patients with stable SGLT2i and DPP4i use for cardiovascular outcomes and 19 951 matched pairs for kidney outcomes, RMST analysis translated the SGLT2i-associated cardiorenal outcomes into clinically intuitive estimates, and estimated heterogeneous treatment effects among patients with diverse characteristics in routine clinical settings.

Meaning   This study’s findings suggest the feasibility of using RMST analysis to supplement traditional survival analyses for investigating cardiorenal outcomes among diverse patients with SGLT2i treatment.

Importance   Increasing numbers of post hoc analyses have applied restricted mean survival time (RMST) analysis to aggregate-level data from clinical trials to report treatment effects, but studies that use individual-level claims data are needed to determine the feasibility of RMST analysis for quantifying treatment effects among patients with type 2 diabetes in routine clinical settings.

Objectives   To apply RMST analysis for assessing sodium-glucose cotransporter-2 inhibitor (SGLT2i)–associated cardiovascular (CV) events and estimating heterogeneous treatment effects (HTEs) on CV and kidney outcomes in routine clinical settings.

Design, Setting, and Participants   This comparative effectiveness study of Taiwan’s National Health Insurance Research Database examined 21 144 propensity score (PS)-matched pairs of patients with type 2 diabetes with SGLT2i and dipeptidyl peptidase-4 inhibitor (DPP4i) treatment for assessing CV outcomes, and 19 951 PS-matched pairs of patients with type 2 diabetes with SGLT2i and DPP4i treatment for assessing kidney outcomes. Patients were followed until December 31, 2018. Statistical analysis was performed from August 2021 to April 2022.

Exposures   Newly stable SGLT2i or DPP4i use in 2017.

Main Outcomes and Measures   Study outcomes were CV events including hospitalization for heart failure (HHF), 3-point major adverse CV events (3P-MACE: nonfatal myocardial infarction [MI], nonfatal stroke, and CV death), 4-point MACE (4P-MACE: HHF and 3P-MACE), and all-cause death, and chronic kidney disease (CKD). RMST and Cox modeling analyses were applied to estimate treatment effects on study outcomes.

Results   After PS matching, the baseline patient characteristics were comparable between 21 144 patients with stable SGLT2i use (eg, mean [SD] age: 58.3 [10.7] years; 11 990 [56.7%] male) and 21 144 patients with stable DPP4i use (eg, mean [SD] age: 58.1 [11.6] years; 12 163 [57.5%] male) for assessing CV outcomes, and those were also comparable between 19 951 patients with stable SGLT2i use (eg, mean [SD] age: 58.1 [10.7] years; 11 231 [56.2%] male) and 19 951 patients with stable DPP4i use (eg, mean [SD] age: 57.9 [11.5] years; 11 340 [56.8%] male) for assessing kidney outcome. The 2-year difference in RMST between patients with SGLT2i use and patients with DPP4i use was 4.99 (95% CI, 3.56-6.42) days for HHF, 4.12 (95% CI, 2.72-5.52) days for 3P-MACE, 7.72 (95% CI, 5.83-9.61) days for 4P-MACE, 1.26 (95% CI, 0.47-2.04) days for MI, 2.70 (95% CI, 1.57-3.82) days for stroke, 0.69 (95% CI, 0.28-1.11) days for CV death, 6.05 (95% CI, 4.89-7.20) days for all-cause death, and 14.75 (95% CI, 12.99-16.52) days for CKD. Directions of hazard ratios from Cox modeling analyses were consistent with RMST estimates. No association was found between study treatment and the negative control outcome (dental visits for tooth care). Consistent results across sensitivity analyses using high-dimensional PS-matched and PS-weighting approaches supported the validity of primary analysis results. Largest difference in RMST of SGLT2i vs DPP4i use for HHF and CKD was found among patients with established heart failure (30.80 [95% CI, 5.08-56.51] days) and retinopathy (40.43 [95% CI, 31.74-49.13] days), respectively.

Conclusions and Relevance   In this comparative effectiveness study, RMST analysis was feasible for translating treatment effects into more clinically intuitive estimates and valuable for quantifying HTEs among diverse patients in routine clinical settings.

Restricted mean survival time (RMST) analysis is considered a supplement to traditional Cox proportional hazards model (PHM) analysis 1 , 2 in many fields (eg, cardiology, 2 - 4 oncology, 5 - 7 diabetes 8 , 9 ). RMST refers to the mean survival time free from an event over a specific (restricted) time horizon. 2 , 3 , 8 , 10 The absolute difference in RMST between treatments provides an anchor for quantifying treatment effect without imposing any model assumptions. 1 Unlike RMST analysis, Cox PHM analysis requires the assumption of constant proportional hazards between comparison groups over time to provide valid hazard ratios (HRs) for quantifying treatment effect; the HR is therefore difficult to interpret when that assumption does not hold. Also, because HRs are relative estimates and the baseline hazards of the comparison or control group at each time point are not always explicitly provided, the absolute magnitude of hazards in the treatment group is unclear. In contrast, a difference in RMST, using the RMST of the comparison or control group as a reference value, can quantify treatment effect in a more clinically meaningful manner. Hence, RMST analysis provides more clinically interpretable estimates, which explicitly translate treatment benefits into event-free time (eg, days) and can further be applied to estimate health care savings and humanistic benefits (eg, quality of life) more intuitively.
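As a concrete illustration of the definition above (the article's analyses used SAS; this numpy sketch and its toy numbers are ours, not the study's code), the RMST over a horizon tau is the area under a Kaplan-Meier step survival curve:

```python
import numpy as np

def rmst_from_km(event_times, surv_probs, tau):
    """Restricted mean survival time: the area under a Kaplan-Meier
    step function up to the horizon tau.

    event_times: sorted times at which the survival curve drops
    surv_probs:  S(t) immediately after each drop
    tau:         restriction horizon (eg, 730 days for 2 years)
    """
    # Prepend time 0 with survival 1.0, then keep only steps before tau.
    times = np.concatenate(([0.0], np.asarray(event_times, float)))
    surv = np.concatenate(([1.0], np.asarray(surv_probs, float)))
    keep = times < tau
    times, surv = times[keep], surv[keep]
    # Widths of the rectangles under the step curve; the last one ends at tau.
    widths = np.diff(np.concatenate((times, [tau])))
    return float(np.sum(widths * surv))

# Toy curve with drops at days 100 and 400:
print(rmst_from_km([100, 400], [0.9, 0.8], tau=730))
# -> 634.0 (= 100*1.0 + 300*0.9 + 330*0.8)
```

Because the Kaplan-Meier curve is a step function, the area is an exact sum of rectangle areas; no numerical integration is required.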

RMST analysis was recently used for reporting treatment effects in endocrinology, 8 , 9 but these were post hoc analyses of cardiovascular outcomes and mortality trials in type 2 diabetes. In addition, lacking access to original trial records, previous studies reconstructed individual time-to-event data from published Kaplan-Meier (KM) curves to assess the RMST of treatment groups over the trial follow-up period. 2 , 8 However, the validity of RMST estimates is of concern given the uncertainty regarding the consistency between the original and reconstructed data. 11 Moreover, due to highly selective trial participants and a placebo (or usual care)–controlled trial design, the generalizability of trial findings to routine clinical settings, which comprise diverse patient populations and multiple competing treatments, is limited. Hence, comparative effectiveness research using patient-level claims data is warranted to assess the applicability of RMST analysis in routine clinical settings and to affirm that the difference in RMST can be used as a patient-centered measure of treatment benefit.

This study first determined the applicability of RMST analysis for estimating treatment outcomes in routine clinical settings, using well-known cardiovascular (CV) benefits associated with sodium-glucose cotransporter-2 inhibitors (SGLT2is) (eg, hospitalization for heart failure [HHF]) as an example. Then, given limited studies on SGLT2i-associated kidney outcomes, we further used RMST and Cox PHM analyses to estimate the SGLT2i-associated treatment outcomes in chronic kidney disease (CKD), and then explored the heterogeneous treatment effects (HTEs) on HHF and CKD among patient populations with diabetes in routine clinical settings.

This comparative effectiveness study used an active-comparator and new-user design 12 based on Taiwan’s National Health Insurance Research Database (NHIRD). Briefly, Taiwan’s NHI program covers health care services for more than 99% of the Taiwanese population. The NHIRD comprises individual-level, encrypted, and deidentified health care records, including outpatient visits, emergency department visits, inpatient admissions, and prescription information. 13 This study was approved by the National Cheng Kung University Hospital institutional review board and informed patient consent was waived because the study used deidentified patient data. This study followed the International Society for Pharmacoeconomics and Outcomes Research ( ISPOR ) reporting guideline.

First, patients with type 2 diabetes with stable use of SGLT2is or dipeptidyl peptidase-4 inhibitors (DPP4is) were identified to avoid misclassification of study cohorts due to the inclusion of short-term use of study drugs (eFigure 1 in Supplement 1 ). The first date of treatment initiation in 2017 was defined as the index date. Second, to include only new users of study drugs, patients with exposure to either SGLT2is or DPP4is in the year prior to the index date were excluded. Third, patients aged less than 18 years at the index date or with death records before the index date according to the Cause of Mortality data were excluded to avoid misclassification of study events. Fourth, considering that SGLT2i therapy is not recommended for patients with severe kidney impairment, 14 patients who had any medical records of end-stage kidney disease or kidney transplantation before the index date according to the Registry for Catastrophic Illness Patients were excluded. Lastly, patients with both SGLT2i and DPP4i use at the index date were excluded to ensure that the SGLT2i and DPP4i groups were mutually exclusive. The study cohort for aim 1 (CV outcomes) was obtained using the aforementioned operational steps. Based on this cohort population, patients with any CKD diagnoses in the outpatient or inpatient files of the NHIRD within 1 year prior to the index date were excluded to obtain another study cohort for aim 2 (kidney outcome).

To enhance the between-group comparability and the control for confounding by indication, patients with stable SGLT2i use and patients with stable DPP4i use were 1:1 matched based on 5-to-1 digit propensity score (PS) greedy matching 15 in the primary analyses. The PS for each study participant was estimated using a logistic regression model analysis to model SGLT2is vs DPP4is as a function of a series of baseline patient characteristics ( Table 1 ). 16 , 17 The operational definitions of baseline patient comorbidity characteristics are detailed in eTable 1 in Supplement 1 . Of note, the PS matching (PSM) procedures were separately performed in aim 1 (CV outcomes) and aim 2 (kidney outcome) cohorts.
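The 5-to-1 digit greedy matching cited here can be sketched as follows. This is a hypothetical illustration (the function name and inputs are ours; the study used its own implementation in SAS): subjects are first paired when their propensity scores agree to 5 decimal places, then unmatched subjects are re-attempted at 4 digits, and so on down to 1 digit. In practice, each patient's PS would first be estimated with the logistic regression model described above.

```python
def greedy_digit_match(ps_treated, ps_control, digits=range(5, 0, -1)):
    """Sketch of 5-to-1 digit greedy propensity-score matching.

    Pairs treated and control subjects whose PS values agree when
    rounded to 5 decimal places, then retries the leftovers at 4
    digits, and so on down to 1. Returns (treated_index, control_index)
    pairs; each subject is matched at most once.
    """
    unmatched_t = set(range(len(ps_treated)))
    unmatched_c = set(range(len(ps_control)))
    pairs = []
    for d in digits:
        # Bucket the remaining controls by their PS rounded to d digits.
        buckets = {}
        for j in sorted(unmatched_c):
            buckets.setdefault(round(ps_control[j], d), []).append(j)
        # Greedily pair each remaining treated subject within its bucket.
        for i in sorted(unmatched_t):
            key = round(ps_treated[i], d)
            if buckets.get(key):
                j = buckets[key].pop(0)
                pairs.append((i, j))
                unmatched_t.discard(i)
                unmatched_c.discard(j)
    return pairs

# Pair 0 matches exactly at 5 digits; pair 1 only agrees at 3 digits.
print(greedy_digit_match([0.51234, 0.30000], [0.51234, 0.29990]))
# -> [(0, 0), (1, 1)]
```

The digit-by-digit relaxation is what makes the algorithm "greedy": close matches are locked in early and are never revisited, which is fast but order-dependent.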

The Anatomical Therapeutic Chemical Classification System from the World Health Organization was applied to identify drug exposure. The study outcomes for aim 1 13 were CV events including HHF, 3-point major adverse cardiovascular events (3P-MACE: nonfatal stroke, nonfatal myocardial infarction [MI], or CV death), 4P-MACE (HHF or 3P-MACE), nonfatal stroke, nonfatal MI, CV death, and all-cause death, and the study outcome for aim 2 was CKD (eTable 2 in Supplement 1 ). Mortality status was ascertained using the Cause of Mortality data in the NHIRD. Patients were followed from the index date until the occurrence of a study outcome, loss of follow-up, or December 31, 2018, whichever came first.

The standardized mean difference (SMD) was used to assess the balance in baseline patient characteristics between treatment groups before and after the matching, with an absolute SMD of less than 0.1 indicating a negligible between-group difference. Primary analyses included RMST and Cox PHM analyses of each study outcome. Briefly, the KM survival curves of study outcomes for treatment groups over a specific time interval were first plotted based on individual-level time-to-event data, and the RMST of each drug group was then estimated as the area under the KM curve. 18 The time horizon for a given outcome was determined as the minimum of the longest follow-up times of patients with SGLT2i use and patients with DPP4i use. The difference in RMST between the SGLT2i and DPP4i groups with the associated 95% CI was then estimated for each study outcome. The hazard ratios (HRs) and associated 95% CIs of SGLT2i vs DPP4i use for each study outcome from the Cox model analyses were also estimated for comparison with the estimates of difference in RMST in terms of the direction of findings. Statistical significance was determined if the 95% CI for the difference in RMST did not overlap 0 or if the 95% CI for the HR did not overlap 1.
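The balance diagnostic described above can be sketched for a continuous covariate as follows (a minimal illustration with toy numbers, not the study's code; binary covariates use an analogous formula based on proportions):

```python
import numpy as np

def smd(x_treated, x_control):
    """Standardized mean difference for a continuous covariate:
    the difference in group means divided by the pooled standard
    deviation. An absolute SMD below 0.1 is the conventional
    threshold for adequate between-group balance.
    """
    x1 = np.asarray(x_treated, float)
    x0 = np.asarray(x_control, float)
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return float((x1.mean() - x0.mean()) / pooled_sd)

# Toy covariate values in two groups: means differ by exactly one
# pooled standard deviation, so the SMD is 1.0 (severe imbalance).
print(smd([2, 3, 4], [1, 2, 3]))  # -> 1.0
```

In a matched cohort, this statistic would be computed for every baseline characteristic in Table 1 and checked against the 0.1 threshold.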

Several sensitivity analyses using a negative control outcome (ie, dental visits for tooth care), 19 , 20 high-dimensional propensity score matching, 21 - 24 and PS weighting 25 , 26 were performed to explore and diminish the effect of potential confounders that arise from the use of claims data in the estimation of RMSTs and HRs. A series of subgroup analyses based on baseline patient characteristics was performed to evaluate SGLT2i-associated HTEs for HHF and CKD. Sensitivity and subgroup analyses are detailed in eDescription 1 in Supplement 1 . All analyses were performed from August 2021 to April 2022 using SAS software version 9.4 (SAS Institute).
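Among the PS-weighting approaches referenced, the inverse probability of treatment weights are simple to sketch (illustrative only; the function name and toy values are ours): treated subjects are weighted by 1/PS and comparators by 1/(1 - PS), and stabilized weights scale these by the marginal probability of the received treatment.

```python
import numpy as np

def iptw_weights(ps, treated, stabilized=False):
    """Inverse probability of treatment weights from propensity scores.

    Treated subjects receive 1/PS and controls 1/(1 - PS); stabilized
    weights multiply each by the marginal probability of the treatment
    the subject actually received, which reduces the variance of the
    weights.
    """
    ps = np.asarray(ps, float)
    treated = np.asarray(treated, bool)
    w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
    if stabilized:
        p_treat = treated.mean()
        w = w * np.where(treated, p_treat, 1.0 - p_treat)
    return w

# A treated subject with PS 0.25 gets weight 4; a control with PS 0.5
# gets weight 1/(1 - 0.5) = 2.
print(iptw_weights([0.25, 0.5], [True, False]))  # -> [4. 2.]
```

Applying these weights creates a pseudocohort in which treatment is independent of the measured covariates, which is why the weighted analyses retain most of the original cohort rather than discarding unmatched patients.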

There were 21 144 PS-matched pairs of patients with stable use of SGLT2i and patients with stable use of DPP4i included in the study cohort for aim 1 (CV outcomes) and 19 951 PS-matched pairs included for aim 2 (kidney outcome) (eFigure 1 in Supplement 1 ). As shown in Table 1 , after PSM, the baseline patient characteristics achieved satisfactory between-group balance among patients with SGLT2i use (eg, mean [SD] age: 58.3 [10.7] years; 11 990 [56.7%] male; mean [SD] diabetes duration: 8.3 [3.1] years; 4059 [19.2%] with established CV diseases) and patients with DPP4i use (eg, mean [SD] age: 58.1 [11.6] years; 12 163 [57.5%] male; mean [SD] diabetes duration: 8.3 [3.2] years; 4036 [19.0%] with established CV diseases) for assessing CV outcomes, and those were also comparable between patients with SGLT2i use (eg, mean [SD] age: 58.1 [10.7] years; 11 231 [56.2%] male; mean [SD] diabetes duration: 8.3 [3.2] years; 3784 [18.9%] with established CV diseases) and patients with DPP4i use (eg, mean [SD] age: 57.9 [11.5] years; 11 340 [56.8%] male; mean [SD] diabetes duration: 8.2 [3.2] years; 3759 [18.8%] with established CV diseases) for assessing the kidney outcome. eTable 3 in Supplement 1 provides patient baseline characteristics before the matching. The detailed PS distribution is shown in eFigure 2 in Supplement 1 .

Table 2 indicates that over 2 years, the RMSTs for CV outcomes ranged from 713.18 (95% CI, 711.9-714.4) days (for 4P-MACE) to 728.25 (95% CI, 728.0-728.5) days (for CV death) among patients with SGLT2i use and from 705.35 (95% CI, 703.8-706.8) days (for 4P-MACE) to 727.55 (95% CI, 727.2-727.8) days (for CV death) among patients with DPP4i use; for the CKD outcome, the RMST was 720.48 (95% CI, 719.5-721.3) days among patients with SGLT2i use and 705.52 (95% CI, 703.9-707.0) days among patients with DPP4i use. Accordingly, the use of SGLT2is vs DPP4is delayed the mean time to the occurrence of HHF by 4.99 (95% CI, 3.56-6.42) days, 3P-MACE by 4.12 (95% CI, 2.72-5.52) days, 4P-MACE by 7.72 (95% CI, 5.83-9.61) days, MI by 1.26 (95% CI, 0.47-2.04) days, stroke by 2.70 (95% CI, 1.57-3.82) days, CV death by 0.69 (95% CI, 0.28-1.11) days, all-cause death by 6.05 (95% CI, 4.89-7.20) days, and CKD by 14.75 (95% CI, 12.99-16.52) days. Additionally, the HRs of SGLT2is vs DPP4is from the Cox model analyses were 0.61 (95% CI, 0.53-0.70) for HHF, 0.70 (95% CI, 0.61-0.79) for 3P-MACE, 0.67 (95% CI, 0.60-0.73) for 4P-MACE, 0.71 (95% CI, 0.57-0.90) for MI, 0.69 (95% CI, 0.59-0.80) for stroke, 0.51 (95% CI, 0.35-0.76) for CV death, 0.46 (95% CI, 0.39-0.53) for all-cause death, and 0.38 (95% CI, 0.33-0.43) for CKD. Details of survival curves of the SGLT2i and DPP4i groups for each outcome are given in eFigure 3 in Supplement 1 .

No statistical association between SGLT2i vs DPP4i use and tooth care was found (ie, a difference in RMST of 0.09 days [95% CI, –0.19 to 0.38 days] and an HR of 0.78 [95% CI, 0.41-1.47] for SGLT2i vs DPP4i use over 2 years). The magnitudes of postponement of study outcomes from using SGLT2is vs DPP4is and the associated HRs in the sensitivity analyses based on high-dimensional PS-matched cohorts ( Table 3 ) are in line with the primary analysis findings based on PS-matched cohorts ( Table 2 ): with SGLT2i vs DPP4i use, HHF outcomes were postponed by an RMST of 3.77 (95% CI, 2.36-5.18) days, 3P-MACE outcomes by 2.55 (95% CI, 1.20-3.91) days, 4P-MACE outcomes by 5.13 (95% CI, 3.29-6.97) days, MI outcomes by 0.89 (95% CI, 0.15-1.64) days, stroke outcomes by 1.45 (95% CI, 0.36-2.55) days, CV death by 0.56 (95% CI, 0.14-0.97) days, all-cause death by 3.56 (95% CI, 2.48-4.63) days, and CKD outcomes by 15.46 (95% CI, 13.58-17.34) days. In addition, the estimated HRs derived from 3 PS-weighted pseudocohorts (ie, inverse probability of treatment weighting [IPTW], stabilized IPTW, and standardized mortality ratio weighting [SMRW]) for CV and kidney outcomes (eTable 4 in Supplement 1 ) were comparable with the results from the analyses of PS-matched cohorts ( Table 2 ).

All subgroup analyses on the HHF outcome showed significantly favorable results (in terms of difference in RMST and estimated HRs) for the use of SGLT2is vs DPP4is, except for the analyses among patients with a diabetes duration of shorter than 8 years and those with a CKD history. However, treatment heterogeneity of SGLT2i vs DPP4i use on HHF across these subgroups was found, given a wide range of estimates of difference in RMST, from 1.65 (95% CI, –0.52 to 3.82) days for patients with a diabetes duration of shorter than 8 years to 30.80 (95% CI, 5.08-56.51) days for those with established HF events ( Figure 1 ). Treatment heterogeneity on the CKD outcome across patient subgroups was also found, with the smallest between-group difference in RMST among patients receiving metformin (9.01 [95% CI, 6.47-11.55] days) and the largest among patients with a history of retinopathy (40.43 [95% CI, 31.74-49.13] days) ( Figure 2 ). Detailed event rates in patient subgroups are shown in eTables 5 and 6 in Supplement 1 .

To our knowledge, this study is the first to support the applicability of RMST analyses in a routine clinical setting. Several rigorous methodologies were applied to overcome potential confounding and biases that arise with the use of claims data and a series of sensitivity analyses were conducted to confirm study robustness. Furthermore, HTEs in diverse patients under clinical practice were reported to facilitate individualized medicine. RMST estimates are both informative and interpretable for patients regarding their expectation of treatment benefit at its initiation, and thereby support their decisions to undertake and even adhere to treatment. 8 Additionally, because RMST analyses translate the treatment-related survival benefit into a more clinically meaningful measure (eg, event-free time), corresponding health care savings and humanistic benefits (eg, quality of life) can be intuitively estimated. 27 - 29 Therefore, RMST results could be an alternative metric for demonstrating the value of treatment in health technology assessment or reassessment, especially when the violation of proportional hazards is of concern, 30 to support health care policy decisions and resource allocation.

Previous trials reported that, compared with placebo or sulfonylurea, SGLT2i therapy delayed the occurrence of 3P-MACE by 4.1 to 32 days. 8 This study using claims data found that the use of SGLT2is vs DPP4is postponed CV events by 0.69 to 7.72 days ( Table 2 ). Although direct comparison between clinical trials and studies using claims data should be done with caution due to the considerable differences in study design and population (eg, inclusion and exclusion criteria, study follow-up, comparison groups), the SGLT2i-associated CV benefits indicated by RMST estimates are consistent between the present analysis using claims data and previous trials. This supports the validity of our study and the applicability of RMST analyses in routine clinical settings. Therefore, it is important to highlight the methodologies applied in this study to ensure the validity of RMST analyses in the assessment of treatment outcomes in routine clinical settings, including the PSM procedures used to achieve a greater level of between-group comparability, 25 the negative control outcome analysis to corroborate the validity of study data and analytic procedures, 20 high-dimensional PS techniques for addressing potential unmeasured confounding in studies using claims data, which may provide more conservative estimates, 21 - 24 and PS-weighting methods (ie, IPTW, stabilized IPTW), which retained most of the original study cohort, to support the external validity of our findings. 25 , 26 These rigorous methodologies allow RMST analyses to be conducted in routine clinical settings.

The difference in RMST facilitates intuitive inferences regarding relative treatment effects across different clinical outcomes and patient subgroups. 7 In contrast, based on HRs alone, comparative treatment effects across different outcomes might not be explicit because the baseline hazards of the control group can differ across outcomes. 7 , 31 Taking the HHF and CV death outcomes as examples: although the use of SGLT2is vs DPP4is was associated with 39% and 49% decreases in the risk of HHF and CV death (HRs, 0.61 and 0.51), respectively, inference from a direct comparison of these estimates should be made with great caution because the association of DPP4is with the 2 outcomes is apparently different (eFigure 3 in Supplement 1 ). In contrast, the effect in the comparator group is ascertained in RMST analyses, which facilitates the estimation of the difference in RMST. Such absolute values can be applied to compare treatment effects across different outcomes. According to the estimates of difference in RMST, the absolute delays of the occurrence of HHF and CV death with SGLT2i vs DPP4i use were 4.99 and 0.69 days, respectively. Based on such intuitive values, one could conclude that SGLT2i therapy yielded a greater benefit in HHF-free time than in time free from CV death. Further details on the comparison and interpretation of RMST and HR estimates are provided in eDescription 2 in Supplement 1 .

The utility of RMST analyses in this study using claims data was also demonstrated by the quantification of HTEs into a clinically meaningful measure. 32 That is, the use of SGLT2is vs DPP4is delayed HHF occurrence by as little as 1.65 days among patients with a diabetes duration of less than 8 years and by as much as 30.80 days among those with established HF ( Figure 1 ). Hence, in studies of treatment effects under clinical practice, RMST analyses should be used to supplement traditional survival analyses using Cox modeling: RMST estimates reveal the magnitude of the treatment effect for individual treatments and the comparative effects of different treatment groups, across different outcomes and diverse patient subgroups in routine clinical settings. Therefore, RMST estimates together with HRs could optimize clinical communication and treatment decisions in routine clinical settings.

Given limited studies on the association of SGLT2i therapy and the prevention of CKD among patients with diabetes, the HR and RMST estimates in this study add supporting evidence for the SGLT2i-associated benefit for the prevention of incident CKD under clinical practice. Specifically, the use of SGLT2is vs DPP4is was associated with a 62% decreased risk for CKD (HRs of 0.38 [95% CI, 0.33-0.43] and 0.38 [95% CI, 0.34-0.43] for PS-matched and high-dimensional PS-matched cohorts, respectively), which fell within the range of previously reported HR estimates (ie, 0.29 [95% CI, 0.22-0.38] 33 to 0.44 [95% CI, 0.28-0.69] 34 ). Additionally, the estimated difference in RMST in delaying CKD while using SGLT2is vs DPP4is over 2 years was 14.75 (95% CI, 12.99-16.52) days (RMST for patients with stable SGLT2i use: 720.48 [95% CI, 719.56-721.39] days; RMST for patients with stable DPP4i use: 705.52 [95% CI, 703.98-707.06] days); these are informative and intuitive values for physicians and patients (ie, they indicate that SGLT2i therapy could extend the event-free time of CKD by nearly half a month over 2 years). Moreover, our subgroup analysis results ( Figure 2 ) support kidney benefits with SGLT2i therapy across diverse patient populations in routine clinical settings. Patients who have diabetes with retinopathy may benefit most from SGLT2is (ie, using SGLT2is vs DPP4is among patients who have diabetes with retinopathy could delay the occurrence of CKD by approximately 40 days). Diabetic retinopathy may be representative of systemic microvascular damage secondary to diabetes and is associated with composite kidney end points. 35 , 36 This implies that among patients with diabetes and retinopathy in routine clinical settings who are known to be at an increased risk for developing CKD, 37 , 38 timely intervention with SGLT2i therapy may maximize treatment benefit in terms of long-term kidney outcomes of patients.

Several limitations of this study should be acknowledged. First, patient laboratory records (eg, HbA 1c ) reflecting disease severity were not available in the administrative data used in this study. This may affect the validity of outcome assessment. To minimize this concern, a large number of measurable indicators were considered in the PSM procedure, and the unmeasured confounder issue was addressed using advanced high-dimensional PS techniques. Second, because of the limited study period (ie, 2 years), the postponements of the occurrence of CV and kidney events following SGLT2i vs DPP4i initiation (presented in days) were relatively small (ie, a few days to 1 month). Additionally, inference based on our RMST results was limited to 2 years, and the extrapolation of RMST estimates beyond this prespecified time is not supported. Despite the limited study period, estimates of difference in RMST were still statistically significant across all analyses; therefore, one may expect that the difference in RMST could be magnified with a longer follow-up period. Lastly, because the RMST analyses were based on KM curves, our RMST estimates might experience problems commonly seen with KM curves (eg, competing risks, noninformative censoring).

Along with the application of rigorous methodologies to minimize potential confounding and biases that arise with the use of claims data, this study adds supporting clinical research evidence for the feasibility of using RMST analyses as a supplement to traditional survival analyses for investigating treatment effects in routine clinical settings. Our assessments of the comparative CV and kidney outcomes of SGLT2i vs DPP4i therapy provide translated and intuitive evidence that can facilitate patient-clinician communication to optimize shared decision-making and quantify HTEs among diverse patient populations to promote individualized medicine in routine clinical settings.

Accepted for Publication: October 28, 2022.

Published: December 15, 2022. doi:10.1001/jamanetworkopen.2022.46928

Correction: This article was corrected on January 24, 2023, to fix errors in the end matter and Supplement 1.

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2022 Peng ZY et al. JAMA Network Open .

Corresponding Author: Huang-Tz Ou, PhD, Institute of Clinical Pharmacy and Pharmaceutical Sciences, College of Medicine, National Cheng Kung University, 1 University Rd, Tainan 701, Taiwan ( [email protected] ).

Author Contributions: Dr Ou had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Peng, Yang, Wu, Lin, Ou.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Peng, Lin, Ou.

Critical revision of the manuscript for important intellectual content: Yang, Kuo, Wu, Ou.

Statistical analysis: Peng, Yang, Kuo, Ou.

Obtained funding: Wu, Ou.

Administrative, technical, or material support: Wu, Ou.

Supervision: Wu, Lin, Ou.

Conflict of Interest Disclosures: Dr Wu reported receiving honorarium for lectures, attending meetings, and/or travel from Eli Lilly, Roche, Amgen, Merck, Servier Laboratories, GE Lunar, Harvester, AstraZeneca, Novartis, TCM Biotech, and Alvogen/Lotus. No other disclosures were reported.

Funding/Support: This project was supported by grants from Ministry of Science and Technology in Taiwan (grant MOST 109-2320-B-006 −047-MY3) (recipient: Dr Ou) and AstraZeneca Taiwan Limited (recipient: Dr Ou).

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement 2 .

Additional Contributions: We are grateful to AstraZeneca Taiwan Limited for funding this project and the Health Data Science Center, National Cheng Kung University Hospital, for providing administrative and technical support.


IMAGES

  1. What is Research

    a research based meaning

  2. Research: Meaning, Definition, Importance & Types

    a research based meaning

  3. Types of Research Methodology: Uses, Types & Benefits

    a research based meaning

  4. Types of Research

    a research based meaning

  5. Research

    a research based meaning

  6. Definition of research

    a research based meaning

COMMENTS

  1. Evidence-Based? Research-Based? What does it all Mean?

    Clarifying the Difference between Research-Based and Evidence-Based. My current working definition of research-based instruction has come to mean those practices/programs that are based on well-supported and documented theories of learning. The instructional approach is based on research that supports the principles it incorporates, but there ...

  2. "Evidence-Based" vs. "Research-Based"

    Evidence-Informed (or Research-Based ) Practices are practices that were developed based on the best research available in the field. This means that users can feel confident that the strategies and activities included in the program or practice have a strong scientific basis for their use. Unlike Evidence-Based Practices or Programs, Research ...

  3. Research Based Learning: a Lifelong Learning Necessity

    A key component of research-based learning is the identification and clarification of issues, problems, challenges and questions for discussion and exploration. The learner is able to seek relevancy in the work they are doing and to become deeply involved in the learning process. b. Find and process information.

  4. Evidence-Based vs. Research-Based Programs: Definitions and ...

    A research-based program is a program designed based on scientific theories. With this type of program, an education researcher may develop an intervention based on research from educational theories and published studies. The researcher can describe their program as research-based because they used existing analyses and theories to develop it.

  5. Evidence-Based Research Series-Paper 1: What Evidence-Based Research is

    Evidence-based research is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner. Results: We describe evidence-based research and provide an overview of the approach of systematically and transparently using previous ...

  6. Research-Based Definition

    Research-based refers to any educational concept or strategy that is derived from or informed by objective academic research or metrics of performance.

  7. Evidence-Based Definition

    Evidence-Based. A widely used adjective in education, evidence-based refers to any concept or strategy that is derived from or informed by objective evidence—most commonly, educational research or metrics of school, teacher, and student performance. Among the most common applications are evidence-based decisions, evidence-based school ...

  9. What Is Research?

    Research is the deliberate, purposeful, and systematic gathering of data, information, facts, and/or opinions for the advancement of personal, societal, or overall human knowledge. Based on this definition, we all do research all the time. Most of this research is casual research, such as asking friends what they think of different restaurants.

  10. Evidence-Based Practice and Nursing Research

    Evidence-based practice is now widely recognized as the key to improving healthcare quality and patient outcomes. Although the purposes of nursing research (conducting research to generate new knowledge) and evidence-based nursing practice (utilizing the best evidence as the basis of practice) seem quite different, an increasing number of research studies connect the two.

  11. What is Scientific Research and How Can it be Done?

    Research conducted in a planned manner for the purpose of contributing to science through the systematic collection, interpretation, and evaluation of data is called scientific research; a researcher is one who conducts this research.

  12. Full article: Is research-based learning effective? Evidence from a pre

    Conducting one's own research project involves various cognitive, behavioural, and affective experiences (Lopatto, 2009, 29), which in turn lead to a wide range of benefits associated with research-based learning (RBL). RBL is also associated with long-term societal benefits because it can foster scientific careers.

  14. Research

    Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is the debatable body of thought which offers an alternative to purely scientific methods in research in its search for knowledge and truth.

  15. Evidence-Based Practice: A Common Definition Matters

    The APA Presidential Task Force on EBP (2006) also shared its support of the original EBP definition: "Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences" (p. 272).

  16. Science-based, Research-based, Evidence-based: What's the ...

    Research-based - Parts or components of the program or method are based on practices demonstrated effective through research. Evidence-based - The entire program or method has been demonstrated through research to be effective. We want to point out that evidence-based is not always fool-proof.

  17. The Evidence for Evidence-Based Practice Implementation

    Steps of promoting adoption of EBPs can be viewed from the perspective of those who conduct research or generate knowledge, those who use the evidence-based information in practice, and those who serve as boundary spanners linking knowledge generators with knowledge users.

  18. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about your overall research objectives and approach, whether you'll rely on primary or secondary research, your sampling methods or criteria for selecting subjects, and your data collection methods.

  19. What Is Research, and Why Do People Do It?

    Every day people do research as they gather information to learn about something of interest. In the scientific world, however, research means something different than simply gathering information. Scientific research is characterized by its careful planning and observing, and by its relentless efforts to understand and explain.

  20. Research-based learning

    In research-based learning, research is regarded as a theme which underpins teaching at a range of levels. As well as incorporating outcomes of research into curricula, it includes developing students' awareness of processes and methods of enquiry, and creating an inclusive culture of research involving staff and students.

  21. Basic research

    Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory.

  22. Research-based Definition: 463 Samples

    Research-based means research that is based on the neuro-biological, behavioral, and social sciences and that has led to major advances in understanding the conditions that influence whether children get off to a promising or worrisome start in life.

  23. Research base Definition

    Research-based means a program or practice that has some research demonstrating effectiveness, but that does not yet meet the standard of evidence-based practices.
