
Module 3 Chapter 1: Overview of Intervention/Evaluation Research Approaches

In our prior course, you learned how the nature of an investigator’s research question dictates the type of study approach and design that might be applied to achieve the study aims. Intervention research typically asks questions related to the outcomes of an intervention effort or approach. However, questions also arise concerning implementation of interventions, separate from understanding their outcomes. Practical, philosophical, and scientific factors contribute to investigators’ intervention study approach and design decisions.

In this chapter you learn:

  • how content from our earlier course about study approaches and designs relates to intervention research;
  • additional approaches to intervention research (participatory research; formative, process, outcome, and cost-related evaluation research);
  • intervention research strategies for addressing intervention fidelity and internal validity concerns.

Review and Expansion: Study Approaches

In our earlier course you became familiar with the ways that research questions lead to research approach and methods. Intervention and evaluation research are not different: the question dictates the approach. In the earlier course, you also became familiar with the philosophical, conceptual and practical aspects of different approaches to social work research: qualitative, quantitative, and mixed methods. These methods are used in research for evaluating practice and understanding interventions, as well. The primary emphasis in this module revolves around quantitative research designs for practice evaluation and understanding interventions. However, taking a few moments to examine qualitative and mixed methods in these applications is worthwhile. Additionally, we introduce forms of participatory research—something we did not discuss regarding efforts to understand social work problems and diverse populations. Participatory research is an approach rich in social work tradition.

Qualitative methods in intervention & evaluation research.

The research questions asked by social workers about interventions often lend themselves to qualitative study approaches. Here are six examples.

  • Early in the process of developing an intervention, social workers might simply wish to create a rich description of the intervention, the contexts in which it is being delivered, or the clients’ experience with the intervention. This type of information is going to be critically important in developing a standardized protocol which others can use in delivering the intervention, too. Remember that qualitative methods are ideally suited for answering exploratory and descriptive questions.
  • Qualitative methods are well-suited to exploring different experiences related to diversity—the results retain individuality arising from heterogeneity rather than homogenizing across individuals to achieve a “normative” picture.
  • Qualitative methods are often used to assess the degree to which the delivery of an intervention adheres to the procedures and protocol originally designed and empirically tested. This is known as an intervention fidelity issue (see the section below on the topic of process evaluation).
  • Intervention outcomes are sometimes evaluated using qualitative approaches. For example, investigators wanted to learn from adult day service participants what they viewed as the impact of the program on their own lives (Dabelko-Schoeny & King, 2010). The value of such information is not limited to evaluating this one program. Evaluators are informed about important evaluation variables to consider in their own efforts to study interventions delivered to older adults—variables beyond the typical administrative criteria of concern. The study participants identified social connections, empowering relationships with staff, and enjoyment of activities as important evaluation criteria.
  • Assessing the need for intervention (needs assessment) is often performed with qualitative approaches, especially focus groups, open-ended surveys, and GIS mapping.
  • Qualitative approaches are an integral aspect of mixed-methods approaches.

Qualitative approaches often involve in-depth data from relatively few individuals, seeking to understand their individual experiences with an intervention. As such, these study approaches are relatively sensitive to nuanced individual differences—differences in experience that might be attributed to cultural, clinical, or other demographic diversity. This is true, however, only to the extent that diversity is represented among study participants, and individuals cannot be presumed to represent groups or populations.


Quantitative methods in intervention & evaluation research.

Many intervention and evaluation research questions are quantitative in nature, leading investigators to adopt quantitative approaches or to integrate quantitative approaches in mixed methods research. In these instances, “how much” or “how many” questions are being asked, questions such as:

  • how much change was associated with intervention;
  • how many individuals experienced change/achieved change goals;
  • how much change was achieved in relation to the resources applied;
  • what trends in numbers were observed.
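Because these questions reduce to counts and averages, the underlying computations are simple. Below is a minimal sketch in Python, using invented scores and an invented change-goal threshold, of how the first two questions might be answered from pre/post intervention data.

```python
# Minimal sketch of "how much" and "how many" questions using
# hypothetical pre/post intervention scores (higher = better functioning).
pre_scores = [12, 15, 9, 20, 14, 11]    # intake measurements
post_scores = [18, 16, 15, 22, 13, 19]  # discharge measurements
change_goal = 4  # hypothetical clinically meaningful improvement

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

# "How much change was associated with intervention?" (average change)
mean_change = sum(changes) / len(changes)

# "How many individuals achieved their change goals?" (count, proportion)
achieved = sum(1 for c in changes if c >= change_goal)

print(f"Mean change: {mean_change:.2f}")
print(f"Achieved goal: {achieved} of {len(changes)} "
      f"({achieved / len(changes):.0%})")
```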

Many study designs detailed in Chapter 2 reflect the philosophical roots of quantitative research, particularly those designed to zero in on causal inferences about intervention—the explanatory research designs. Quantitative approaches are also used in descriptive and exploratory intervention and evaluation studies. By nature, quantitative studies tend to aggregate data provided by individuals, and in this way are very different from qualitative studies. Quantitative studies seek to describe what happens “on average” rather than describing individual experiences with the intervention—you learned about central tendency and variation in our earlier course (Module 4). Differences in experience related to demographic, cultural, or clinical diversity might be quantitatively assessed by comparing how the intervention was experienced by different groups (e.g., those who differ on certain demographic or clinical variables). However, data for the groups are treated in the aggregate (across individuals) with quantitative approaches.

Mixed methods in intervention & evaluation research.

Qualitative and quantitative approaches are very helpful in evaluation and intervention research as part of a mixed-methods strategy for investigating the research questions. In addition to the examples previously discussed, integrating qualitative and quantitative approaches in intervention and evaluation research is often done as a means of enriching the results derived from one or the other approach. Here are three scenarios to consider.

  • Investigators wish to use a two-phase approach in studying or evaluating an intervention. First, they adopt a qualitative approach to inform the design of a quantitative study, then they implement the quantitative study as a second phase. The qualitative phase might help inform any aspect of the quantitative study design, including participant recruitment and retention, measurement and data collection, and presenting study results.
  • Investigators use a two-phase approach in studying or evaluating an intervention. First, they implement a quantitative study. Then, they use a qualitative approach to explore the appropriateness and adequacy of how they interpret their quantitative study results.
  • Investigators combine qualitative and quantitative approaches in a single intervention or evaluation study, allowing them to answer different kinds of questions about the intervention.

For example, a team of investigators applied a mixed methods approach in evaluating outcomes of an intensive experiential learning experience designed to prepare BSW and MSW students to engage effectively in clinical supervision (Fisher, Simmons, & Allen, 2016). BSW students provided quantitative data in response to an online survey, and MSW students provided qualitative self-assessment data. The quantitative data answered a research question about how students felt about supervision, whereas the qualitative data were analyzed for demonstrated development in critical thinking about clinical issues. The investigators concluded that their experiential learning intervention contributed to stronger supervisory alliances, BSW students’ satisfaction with their supervisors, and MSW students coming to view supervision as more than an administrative task.


Cross-Sectional & Longitudinal Study Designs.

You are familiar with the distinction between cross-sectional and longitudinal study designs from our earlier course. In that course, we looked at these designs in terms of understanding diverse populations, social work problems, and social phenomena. Here we address how the distinction relates to the conduct of research to understand social work interventions.

  • A cross-sectional study involves data collection at just one point in time. In a program evaluation, for example, the agency might look at some outcome variable at the point when participants complete an intervention or program. Or, perhaps an agency surveys all clients at a single point in time to assess their level of need for a potential new service the agency might offer. Because the data are collected from each person at only one point in time, these are both cross-sectional studies. In terms of intervention studies, one measurement point obviously needs to be after the intervention for investigators to draw inferences about the intervention. As you will see in the discussion of intervention study designs, there are considerable limitations to using only a single measurement to evaluate an intervention (see post-only designs in Chapter 2).
  • A longitudinal study involves data collection at two or more points in time. A great deal of intervention and evaluation research is conducted using longitudinal designs—answering questions about what changes might be associated with the intervention being delivered. For example, in program evaluation, an agency might compare how clients were functioning on certain variables at the time of discharge compared to their level of functioning at intake to the program. Because the same information is collected from each individual at two points in time (pre-intervention and post-intervention), this is a longitudinal design.
  • Distinguishing cross-sectional from longitudinal designs in studies of systems beyond the individual person can become confusing. When social workers intervene with individuals, families, or small groups, it is evident that a longitudinal study involves the same individuals or members at different points in time—perhaps measuring individuals before, immediately after, and months after intervention (this is called follow-up). However, if an intervention is conducted in a community, a state, or across the nation, the data might not be collected from the same individual persons at each point in time—the unit of analysis is what matters here. For example, if the longitudinal study’s unit of analysis is the 50 states, District of Columbia, and 5 inhabited territories of the United States, data are repeatedly collected at that level (states, DC, and territories), perhaps not from the same individual persons in each of those communities.
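One hedged way to see the unit-of-analysis point is in how longitudinal data are laid out. The sketch below uses invented records: in the first layout the repeated unit is the individual person; in the second it is the state, even though different residents may supply the underlying numbers at each wave.

```python
# Hypothetical layouts illustrating unit of analysis in longitudinal data.

# Individual-level: the SAME person is measured at each time point.
individual_records = [
    {"person_id": 1, "time": "pre",  "score": 12},
    {"person_id": 1, "time": "post", "score": 18},
    {"person_id": 2, "time": "pre",  "score": 15},
    {"person_id": 2, "time": "post", "score": 16},
]

# State-level: the SAME state is measured each year, but the individual
# residents supplying the data may differ from one wave to the next.
state_records = [
    {"state": "Ohio",     "year": 2018, "rate_per_100k": 21.4},
    {"state": "Ohio",     "year": 2020, "rate_per_100k": 19.7},
    {"state": "Michigan", "year": 2018, "rate_per_100k": 18.2},
    {"state": "Michigan", "year": 2020, "rate_per_100k": 17.5},
]

# Both designs are longitudinal because each UNIT appears at two or more
# points in time; only the unit of analysis differs.
units = {r["person_id"] for r in individual_records}
print(f"Individual-level units observed twice: {sorted(units)}")
```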


Formative, Process, and Outcome Evaluation

Practice and program evaluation are important aspects of social work practice. It would be nice if we could simply rely on our own sense of what works and what does not. However, social workers are only human and, as we learned in our earlier course, human memory and decisions are vulnerable to bias. Sources of bias include recency, confirmation, and social desirability biases.

  • Recency bias occurs when we place higher emphasis on what has just happened (recently) than on what might have happened in the more distant past. In other words, a social worker might make a casual practice evaluation based on one or two exceptionally good or exceptionally bad recent outcomes rather than a longer, larger history of outcomes and systematic evidence.
  • Confirmation bias occurs when we focus on outcomes that reinforce what we believed, feared, or hoped would happen and de-emphasize alternative events or interpretations that might contradict those beliefs, fears, or hopes.
  • Social desirability bias by practitioners occurs when practice decisions are influenced by a desire to be viewed favorably by others—that could be clients, colleagues, supervisors, or others. In other words, a practice decision might be based on “popular” rather than “best” practices, and casual evaluation of those practices might be skewed to create a favorable impression.

In all three of these forms of bias, the problem is not necessarily intentional, but it does result in a lack of sufficient attention to evidence in monitoring one’s practices. For example, relying solely on qualitative comments volunteered by consumers (anecdotal evidence) is subject to a selection bias—individuals with strong opinions or a desire to support the social workers who helped them are more likely to volunteer than the general population of those served.

Thus, it is incumbent on social work professionals to engage in practice evaluation that is as free of bias as possible. The choice of systematic evaluation approach is dictated by the evaluation research question being asked. According to the Centers for Disease Control and Prevention (CDC), the four most common types of intervention or program evaluation are formative, process, outcome, and impact evaluation ( https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf ). Here, we consider these as three types, combining impact and outcome evaluation into a single category, and we consider an additional category, as well: cost evaluation.

Formative Evaluation.

Formative evaluation is emphasized during the early stages of developing or implementing a social work intervention, as well as following process or outcome evaluation as changes to a program or intervention strategy are considered. The aim of formative evaluation is to understand the context of an intervention, define the intervention, and evaluate feasibility of adopting a proposed intervention or change in the intervention (Trochim & Donnelly, 2007). For example, a needs assessment might be conducted to determine whether the intervention or program is needed, calculate how large the unmet need is, and/or specify where/for whom the unmet need exists. Needs assessment might also include conducting an inventory of services that exist to meet the identified need and where/why a gap exists (Engel & Schutt, 2013). Formative evaluation is used to help shape an intervention, program, or policy.

Formative evaluation process sequence

Process Evaluation.

Investigating how an intervention is delivered or a program operates is the purpose behind process evaluation (Engel & Schutt, 2013). The concept of intervention fidelity was previously introduced. Fidelity is a major point of process evaluation but is not the only point. We know that the greater the degree of fidelity in delivery of an intervention, the more applicable the previous evidence about that intervention becomes in reliably predicting intervention outcomes. As fidelity in the intervention’s delivery drifts or wanes, previous evidence becomes less reliable and less useful in making practice decisions. Addressing this important issue is why many interventions with an evidence base supporting their adoption are manualized, providing detailed manuals for how to implement the intervention with fidelity and integrity. For example, the Parent-Child Interaction Therapy for Traumatized Children (PCIT-TC) treatment protocol is manualized and training certification is available for practitioners to learn the evidence-based skills involved ( https://pcit.ucdavis.edu/ ). This strategy increases practitioners’ adherence to the protocol.

Process evaluation, sometimes called implementation evaluation and sometimes referred to as program monitoring, helps investigators determine the extent to which fidelity has been preserved. But, process evaluation serves other purposes, as well. For example, according to King, Morris and Fitz-Gibbon (1987), process evaluation helps:

  • document details about the intervention that might help explain outcome evaluation results,
  • keep programs accountable (delivering what they claim to deliver),
  • inform planned modifications and changes to the intervention based on evidence.

Process evaluation also helps investigators determine where the facilitators and barriers to implementing an intervention might operate and can help interpret outcomes/results from the intervention, as well. Process evaluation efforts address the following:

  • Who delivered the intervention
  • Who received the intervention
  • What was (or was not) done during the intervention
  • When intervention activities occurred
  • Where intervention activities occurred
  • How the intervention was delivered
  • What facilitated implementation with fidelity/integrity
  • What presented as barriers to implementation with fidelity/integrity

For these reasons, many authors consider process evaluation to be a type of formative evaluation.
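As an illustration only, the sketch below scores a hypothetical session-observation checklist against the questions listed above; the items and the 80% threshold are invented for the example rather than drawn from any particular manualized protocol.

```python
# Hypothetical fidelity checklist for one observed intervention session.
# Items and the 80% threshold are illustrative assumptions only.
session_checklist = {
    "delivered_by_certified_practitioner": True,   # who delivered
    "all_core_components_covered":         True,   # what was done
    "session_within_scheduled_window":     False,  # when it occurred
    "delivered_in_approved_setting":       True,   # where it occurred
    "prescribed_delivery_format_used":     True,   # how it was delivered
}

# Proportion of checklist items met (True counts as 1).
adherence = sum(session_checklist.values()) / len(session_checklist)
print(f"Adherence: {adherence:.0%}")

# Flag sessions drifting from the protocol for supervisory review.
FIDELITY_THRESHOLD = 0.80
if adherence < FIDELITY_THRESHOLD:
    missed = [item for item, met in session_checklist.items() if not met]
    print("Fidelity concern; unmet items:", missed)
```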

Process evaluation sequence

Outcome and Impact Evaluation.

The aim of outcome or impact evaluation is to determine effects of the intervention. Many authors refer to this as a type of summative evaluation, distinguishing it from formative evaluation: its purpose is to understand the effects of an intervention once it has been delivered. The effects of interest usually include the extent to which intervention goals or objectives were achieved. An important factor to evaluate concerns positive and negative “side effects”—those unintended outcomes associated with the intervention. These might include unintended impacts on the intervention participants, or impacts on significant others, those delivering the intervention, the program/agency/institutions involved, and others. While impact evaluation, as described by the CDC, is about policy and funding decisions and longer-term changes, we can include it as a form of outcome evaluation since the questions answered are about achieving intervention objectives. Outcome evaluation is based on the elements presented in the logic model created at the outset of intervention planning.

Process evaluation sequence including early planning, intervention planning, and conclusion processes

Cost-Related Evaluation.

Social workers are frequently faced with efficiency questions related to the interventions we deliver—thus, cost-related evaluation is part of our professional accountability responsibilities. For example, once an agency has applied the evidence-based practice (EBP) process to select the best-fitting program options for addressing an identified practice concern, program planning is enhanced by information concerning which of the options is most cost-effective.  Here are some types of questions addressed in cost-related evaluation.

  • cost analysis: How much does it cost to deliver/implement the intervention with fidelity and integrity? This type of analysis typically analyzes monetary costs, converting inputs into their financial impact (e.g., space resources would be converted into cost per square foot, staffing costs would include salary, training, and benefits costs, materials and technology costs might include depreciation).

  • cost-benefit: What are the inputs and outputs associated with the intervention? This type of analysis involves placing a monetary value on each element of input (resources) and each of the outputs. For example, preventing incarceration would be converted to the dollars saved on jail/prison costs, and perhaps also the value of individuals keeping the jobs and homes they could lose with incarceration, as well as the savings from family members not needing public assistance and children not being placed in foster care while their family member is incarcerated.
  • cost-effectiveness: What is the ratio of cost units (numerator) to outcome units (denominator) associated with delivering an intervention? Outcomes are tied to the intervention goals rather than monetary units. For example, medical interventions are often analyzed in terms of DALYs (disability-adjusted life years)—units designed to indicate “disease burden,” calculated to represent the number of years lost to illness, disability, or premature death (morbidity and mortality). Outcomes might also be numbers of “cases,” such as deaths or hospitalizations related to suicide attempts, drug overdose events, students dropping out from high school, children reunited with their families (family reunification), reports of child maltreatment, persons un- or under-employed, and many more examples. Costs are typically presented as monetary units estimated from a cost analysis. (See http://www.who.int/heli/economics/costeffanalysis/en/ .)
  • cost-utility: A comparison of cost-effectiveness for two or more intervention options, designed to help decision-makers make informed choices between the options.
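To make the cost-effectiveness and cost-utility arithmetic concrete, here is a brief sketch with invented figures comparing two hypothetical program options; the ratio computed is simply total cost divided by outcome units achieved.

```python
# Hypothetical cost-effectiveness comparison of two program options.
# All dollar amounts and outcome counts are invented for illustration.
programs = {
    # total delivery cost (from a cost analysis) and outcome units
    # tied to the intervention goal (e.g., cases of family reunification)
    "Program A": {"total_cost": 250_000, "outcomes_achieved": 100},
    "Program B": {"total_cost": 180_000, "outcomes_achieved": 60},
}

# cost-effectiveness ratio: cost units / outcome units
for name, p in programs.items():
    ratio = p["total_cost"] / p["outcomes_achieved"]
    print(f"{name}: ${ratio:,.0f} per outcome achieved")

# A cost-utility style comparison puts the ratios side by side so
# decision-makers can weigh the options: here Program A costs $2,500 per
# outcome and Program B costs $3,000, so A is more cost-effective even
# though its total budget is larger.
```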

Two of the greatest challenges with these kinds of evaluation are (1) ensuring that all relevant inputs and outputs are included in the analysis, and (2) realistically converting non-monetary costs and benefits into monetary units to standardize comparisons. An additional challenge has to do with budget structures: the gains might be realized in a different budget than where the costs are borne. For example, implementing a mental health or substance misuse treatment program in jails and prisons costs those facilities; the benefits are realized in budgets outside those facilities—schools, workplaces, medical facilities, family services, and mental health programs in the community. Thus, it is challenging to make decisions based on these analyses when constituents are situated in different systems operating with “siloed” budgets where there is little or no sharing across systems.

Example of siloed budgets

An Additional Point.

An intervention or evaluation effort does not necessarily need to be limited to one type. As in the case of mixed-methods approaches, it is sometimes helpful to engage in multiple evaluation efforts with a single intervention or program. A team of investigators described how they used formative, process, and outcome evaluation all in the pursuit of understanding a single preventive public health intervention called VERB, designed to increase physical activity among youth (Berkowitz et al., 2008). Their formative evaluation efforts allowed the team to assess the intervention’s appropriateness for the target audience and to test different messages. The process evaluation addressed fidelity of the intervention during implementation. And, the outcome evaluation led the team to draw conclusions concerning the intervention’s effects on the target audience. The various forms of evaluation utilized qualitative and quantitative approaches.

Participatory Research Approaches

One contrast previously noted between qualitative and quantitative research is the nature of the investigator’s role. Every effort is made to minimize investigator influence on the data collection and analysis processes in quantitative research. Qualitative research, on the other hand, recognizes the investigator as an integral part of the research process. Participatory research fits into this latter category.

“Participant observation is a method in which natural social processes are studied as they happen (in the field, rather than in the laboratory) and left relatively undisturbed. It is a means of seeing the social world as the research subjects see it, in its totality, and of understanding subjects’ interpretations of that world” (Engel & Schutt, 2013, p. 276).

This quote describes naturalistic observation very well. The difference with participant observation is that the investigator is embedded in the group, neighborhood, community, institution, or other entity under study. Participant observation is one approach used by anthropologists to understand cultures from an embedded rather than outsider perspective. For example, this is how Jane Goodall learned about chimpanzee culture in Tanzania: she became accepted as part of the group she observed, allowing her to describe the members’ behaviors and social relationships, her own experiences as a member of the group, and the theories she derived from 55 years of this work. In social work, the participant approach may be used to answer research questions of the type we explored in our earlier course: understanding diverse populations, social work problems, or social phenomena. The investigator might be a natural member of the group, where the role as group member precedes the role as observer. This is where the term indigenous membership applies: naturally belonging to the group. (The term “indigenous people” describes the native, naturally occurring inhabitants of a place or region.) It is sometimes difficult to determine how the indigenous member’s observations and conclusions might be influenced by his or her position within the group—for example, the experience might be different for men and women, members of different ages, or leaders. Thus, the conclusions need to be confirmed by a diverse membership.

Participant observers are sometimes “adopted” members of the group, where the role of observer precedes their role as group member. It is somewhat more difficult to determine if evidence collected under these circumstances reflects a fully accurate description of the members’ experience unless the evidence and conclusions have been cross-checked by the group’s indigenous members. Turning back to our example with Jane Goodall, she was accepted into the chimpanzee troop in many ways, but not in others—she could not experience being a birth mother to members of the group, for example.

Sometimes investigators are more actively engaged in the life of the group being observed. As previously noted, participant observation is about the processes being left relatively undisturbed (Engel & Schutt, 2013, p. 276). However, participant observers might be more actively engaged in change efforts, documenting the change process from “inside” the group promoting change. These instances are called participatory action research (PAR), where the investigator is an embedded member of the group, joining them in making a concerted effort to influence change. PAR involves three intersecting roles: participation in the group, engaging with the action process (planning and implementing interventions), and conducting research about the group’s action process (see Figure 2-1, adapted from Chevalier & Buckles, 2013, p. 10).

Figure 2-1. Venn diagram of participatory action research roles.


For example, Pyles (2015) described the experience of engaging in participatory action research with rural organizations and rural disaster survivors in Haiti following the January 12, 2010 earthquake. The PAR aimed to promote local organizations’ capacity to engage in education and advocacy and to secure much-needed resources for their rural communities (Pyles, 2015, p. 630). According to the author, rural Haitian communities have a history of experience with exploitative research where outsiders conduct investigations without the input or participation of community members, and where little or no capacity-building action occurs based on study results and recommendations. Pyles also raised the point that, “there are multiple barriers impeding the participation of marginalized people” in community building efforts, making PAR approaches even more important for these groups (2015, p. 634).

The term community-based participatory research (CBPR) refers to collaborative partnerships between members of a community (e.g., a group, neighborhood, or organization) and researchers throughout the entire research process. CBPR partners (internal and external members) all contribute their expertise to the process, throughout the process, and share in all steps of decision-making. Stakeholder members of the community (or organization) are involved as active, equal partners in the research process, co-learning by all members of the collaboration is emphasized, and it represents a strengths-focused approach (Harris, 2010; Holkup, Tripp-Reimer, Salois, & Weinert, 2004). CBPR is relevant in our efforts to understand social work interventions since the process can result in interventions that are culturally appropriate, feasible, acceptable, and applicable for the community since they emerged from within that community. Furthermore, it is a community empowerment approach whereby self-determination plays a key role and the community is left with new skills for self-study, evaluation, and understanding the change process (Harris, 2010). The following characteristics of CBPR help define the approach:

(a) recognizing the community as a unit of identity,

(b) building on the strengths and resources of the community,

(c) promoting colearning among research partners,

(d) achieving a balance between research and action that mutually benefits both science and the community,

(e) emphasizing the relevance of community-defined problems,

(f) employing a cyclical and iterative process to develop and maintain community/ research partnerships,

(g) disseminating knowledge gained from the CBPR project to and by all involved partners, and

(h) requiring long-term commitment on the part of all partners (Holkup, Tripp-Reimer, Salois, & Weinert, 2004, p. 2).

Quinn et al. (2017) published a case study of CBPR practices being employed with youth at risk of homelessness and exposure to violence. The authors cited a “paucity of evidence-based, developmentally appropriate interventions” to address the mental health needs of youth exposed to violence (p. 3). The CBPR process helped determine the acceptability of a person-centered trauma therapy approach called narrative exposure therapy (NET). The results of three pilot projects combined to inform the design of a randomized controlled trial (RCT) to study the impact of the NET intervention. The three pilot projects engaged researchers and members of the population to be served (youth at risk of homelessness and exposure to violence). The authors of the case study article discussed some of the challenges of working with youth in the CBPR process and research process. Adapted from Quinn et al. (2017), these included:

  • Compliance with federal regulations for research involving minors (defined as “children” in the policies). Compounding this challenge was the vulnerable status of the youth due to their homeless status, and the frequency with which many of the youth were not engaged with any adults who had legal authority to provide consent for them to participate.
  • The team was interdisciplinary, which brings many advantages. However, it also presented challenges regarding different perspectives about how to engage in the varied research processes of participant recruitment and retention, measurement, and intervention.
  • Logistics of conducting focus groups with this vulnerable population. Youth encounter difficulties with participating predictably, and for this vulnerable population the practical difficulties are compounded. They experience complex and often competing demands on their schedules, “including school obligations, court, group or other agency appointments, or childcare,” as well as managing public transportation schedules and other barriers (p. 11). Furthermore, members of the group may have pre-existing relationships and social network ties that can impinge on their comfort with openly sharing their experiences or perspectives in the group setting. They may also have skepticism and reservations about sharing with the adults leading the focus group sessions.

Awareness of these challenges can help CBPR teams develop solutions to overcome the barriers. The CBPR process, while time and resource intensive, can result in appropriate intervention designs for under-served populations where existing evidence is not available to guide intervention planning.


A somewhat different approach engages members of the community as consultants regarding interventions with which they may be engaged, rather than a full CBPR approach. This adapted consultation approach presents an important option for ensuring that interventions are appropriate and acceptable for serving the community. However, community members are less integrally involved in the action-related aspects of defining and implementing the intervention, or in the conduct of the implementation research. An example of this important community-as-consultant approach involved a series of six focus group sessions conducted with parents, teachers, and school stakeholders discussing teen pregnancy prevention among high-school aged Latino youth (Johnson-Motoyama et al., 2016). The investigating team reported recommendations and requests from these community members concerning the important role played by parents and potential impact of parent education efforts in preventing teen pregnancy within this population. The community members also identified the importance of comprehensive, empowering, tailored programming that addresses self-respect, responsibility, and “realities,” and incorporates peer role models. They concluded that local school communities have an important role to play in planning for interventions that are “responsive to the community’s cultural values, beliefs, and preferences, as well as the school’s capacity and teacher preferences” (p. 513). Thus, the constituencies involved in this project served as consultants rather than CBPR collaborators. However, the resulting intervention plans could be more culturally appropriate and relevant than intervention plans developed by “outsiders” alone.


One main limitation to conducting CBPR work is the immense amount of time and effort involved in developing strong working collaborative relationships—relationships that can stand the test of time. Collaborative relationships are often built from a series of “quick wins” or small successes over time, where the partners learn about each other, learn to trust each other, and learn to work together effectively.

Chapter Summary

This chapter began with a review of concepts from our earlier course: qualitative, quantitative, mixed-methods, cross-sectional and longitudinal approaches. Expanded content about approach came next: formative, process, outcome, and cost evaluation approaches were connected to the kinds of intervention questions social workers might ask, and participatory research approaches were introduced. Issues of cultural relevance were explored, as well. This discussion of approach leads to an expanded discussion of quantitative study design strategies, which is the topic of our next chapter.


Social Work 3402 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

Guidance on how to develop complex interventions to improve health and healthcare

BMJ Open, Volume 9, Issue 8

  • Alicia O’Cathain 1 (http://orcid.org/0000-0003-4033-506X)
  • Liz Croot 1 (http://orcid.org/0000-0002-3666-6264)
  • Edward Duncan 2 (http://orcid.org/0000-0002-3400-905X)
  • Nikki Rousseau 2
  • Katie Sworn 1
  • Katrina M Turner 3 (http://orcid.org/0000-0002-6375-2918)
  • Lucy Yardley 3, 4
  • Pat Hoddinott 2 (http://orcid.org/0000-0002-4372-9681)
  • 1 Medical Care Research Unit, School of Health and Related Research, University of Sheffield, Sheffield, UK
  • 2 Nursing, Midwifery and Allied Health Professional Research Unit, University of Stirling, Stirling, UK
  • 3 School of Social and Community Medicine, University of Bristol, Bristol, UK
  • 4 Psychology, University of Southampton, Southampton, UK
  • Correspondence to Professor Alicia O’Cathain; a.ocathain{at}sheffield.ac.uk

Objective To provide researchers with guidance on actions to take during intervention development.

Summary of key points Based on a consensus exercise informed by reviews and qualitative interviews, we present key principles and actions for consideration when developing interventions to improve health. These include seeing intervention development as a dynamic iterative process, involving stakeholders, reviewing published research evidence, drawing on existing theories, articulating programme theory, undertaking primary data collection, understanding context, paying attention to future implementation in the real world and designing and refining an intervention using iterative cycles of development with stakeholder input throughout.

Conclusion Researchers should consider each action by addressing its relevance to a specific intervention in a specific context, both at the start and throughout the development process.

Keywords: intervention development

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/bmjopen-2019-029954


Introduction

There is increasing demand for new interventions as policymakers and clinicians grapple with complex challenges, such as integration of health and social care, risk associated with lifestyle behaviours, multimorbidity and the use of e-health technology. Complex interventions are often required to address these challenges. Complex interventions can have a number of interacting components, require new behaviours by those delivering or receiving the intervention or have a variety of outcomes. 1 An example is a multicomponent intervention to help people stand more at work, including a height adjustable workstation, posters and coaching sessions. 2 Careful development of complex interventions is necessary so that new interventions have a better chance of being effective when evaluated and being adopted widely in the real world. Researchers, the public, patients, industry, charities, care providers including clinicians and policymakers can all be involved in the development of new interventions to improve health, and all have an interest in how best to do this.

The UK Medical Research Council (MRC) published influential guidance on developing and evaluating complex interventions, presenting a framework of four phases: development, feasibility/piloting, evaluation and implementation. 1 The development phase is what happens between the idea for an intervention and formal pilot testing in the next phase. 3 This phase was only briefly outlined in the original MRC guidance and requires extension to offer more help to researchers wanting to develop complex interventions. Bleijenberg and colleagues 4 brought together learning from a range of guides/published approaches to intervention development to enrich the MRC framework. 4 There are also multiple sources of guidance to intervention development, embodied in books and journal articles about different approaches to intervention development (for example 5 ) and overviews of the different approaches. 6 These approaches may offer conflicting advice, and it is timely to gain consensus on key aspects of intervention development to help researchers to focus on this endeavour. Here, we present guidance on intervention development based on a consensus study which we describe below. We present this guidance as an accessible communication article on how to do intervention development, which is aimed at readers who are developers, including those new to the endeavour. We do not present it as a ‘research article’ with methods and findings to maximise its use as guidance. Lengthy detail and a long list of references are not provided so that the guidance is focused and user friendly. In addition, the key actions of intervention development are summarised in a single table so that funding panel members and developers can use this as a quick reference point of issues to consider when developing health interventions.

How this guidance was developed

This guidance is based on a study funded by the MRC and the National Institute for Health Research in the UK, with triangulation of evidence from three sources. First, we undertook a review of published approaches to intervention development that offer developers guidance on specific ways to develop interventions 6 and a review of primary research reporting intervention development. The next two phases involved developers and wider stakeholders. Developers were people who had written articles or books detailing different approaches to developing interventions and people who had developed interventions. Wider stakeholders were people involved in the wider intervention development endeavour in terms of being directors of research funding panels, editors of journals that had published intervention development studies, people who had been public and patient involvement members of studies involving intervention development and people working in health service implementation. We carried out qualitative interviews 7 and then we conducted a consensus exercise consisting of two simultaneous and identical e-Delphi studies distributed to intervention developers and wider stakeholders, respectively, and followed this with a consensus workshop. We generated items for the e-Delphi studies based on our earlier reviews and analysis of interview data and asked participants to rate 85 items on a five-point scale from ‘very’ to ‘not important’ using the question ‘when developing complex interventions to improve health, how important is it to’. The distribution of answers to each item is displayed in Appendix 1, and e-Delphi participants are described in Appendix 2. In addition to these research methods, we convened an international expert panel with members from the UK, USA and Europe early in the project to guide the research. Members of this expert panel participated in the e-Delphi studies and consensus workshop alongside other participants.
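As a hedged illustration of the kind of tally an e-Delphi analysis involves (the authors do not specify their analysis procedure here, and the 70% agreement rule below is a common convention rather than necessarily theirs), rating one item on the five-point importance scale might be summarised as follows.

```python
from collections import Counter

# Hypothetical ratings for one e-Delphi item on the five-point scale
# from "not important" (1) to "very important" (5). The 70% consensus
# rule below is a common convention, not necessarily this study's rule.
ratings = [5, 4, 5, 3, 4, 5, 5, 2, 4, 5, 4, 4]

distribution = Counter(ratings)
pct_important = sum(1 for r in ratings if r >= 4) / len(ratings)

print("Rating distribution:", dict(sorted(distribution.items())))
print(f"Rated important (4-5): {pct_important:.0%}")
print("Consensus reached" if pct_important >= 0.70 else "No consensus")
```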

Framework for intervention development

We base this guidance on expert opinion because there is a research evidence gap about which actions are needed in intervention development to produce successful health interventions. Systematic reviews have been undertaken to determine whether following a specific published approach, or undertaking a specific action, results in effective interventions. Unfortunately, this evidence base is sparse in the field of health, largely due to the difficulty of empirically addressing this question. 8 9 Evidence tends to focus on the use of existing theory within intervention development—for example, the theory of Diffusion of Innovation or theories on behaviour change—and a review of reviews shows that interventions developed with existing theory do not result in more effective interventions than those not using existing theory. 10 The authors of this latter review highlight problems with the evidence base rather than dismiss the possibility that existing theory could help produce successful interventions.

Key principles and actions of intervention development are summarised below. More detailed guidance for the principles and actions is available at https://www.sheffield.ac.uk/scharr/sections/hsr/mcru/indexstudy .

Key principles of intervention development

Key principles of intervention development are that it is dynamic, iterative, creative, open to change and forward looking to future evaluation and implementation. Developers are likely to move backwards and forwards dynamically between overlapping actions within intervention development, such as reviewing evidence, drawing on existing theory and working with stakeholders. There will also be iterative cycles of developing a version of the intervention: getting feedback from stakeholders to identify problems, implementing potential solutions, assessing their acceptability and starting the cycle again until assessment of later iterations of the intervention produces few changes. These cycles will involve using quantitative and qualitative research methods to measure processes and intermediate outcomes, and assess the acceptability, feasibility, desirability and potential unintended harms of the intervention.

Developers may start the intervention development with strong beliefs about the need for the intervention, its content or format or how it should be delivered. They may also believe that it is possible to develop an intervention with a good chance of being effective or that it can only do good not harm. Being open to alternative possibilities throughout the development process may lead to abandoning the endeavour or taking steps back as well as forward. The rationale for being open to change is that this may reduce the possibility of developing an intervention that fails during future evaluation or is never implemented in practice. Developers may also benefit from looking forward to how the intervention will be evaluated so they can make plans for this and identify learning and key uncertainties to be addressed in future evaluation.

Key actions of intervention development

Key actions for developers to consider are summarised in table 1 and explored in more detail throughout the rest of the paper. It may not be possible or desirable for developers to address all these actions during their development process, and indeed some may not be relevant to every problem or context. The recommendation made here is that developers ‘consider the relevance and importance of these actions to their situation both at the start of, and throughout, the development process’.


Table 1. Framework of actions for intervention development

These key actions are set out in table 1 in what appears to be a sequence. However, in practice, these actions are addressed in a dynamic way. That is, they are undertaken in parallel and revisited regularly as the intervention evolves, and they interact with each other when learning from one action influences plans for other actions. These actions are explored in more detail below and presented in a logic model for intervention development (figure 1). A logic model is a diagram of how an intervention is proposed to work, showing mechanisms by which an intervention influences the proposed outcomes. 11 The short and long-term effects of successful intervention development were informed by the qualitative interviews with developers and wider stakeholders. 7


Figure 1. Logic model for intervention development.

Plan the development process

Understand the problem.

Developers usually start with a problem they want to solve. They may also have some initial ideas about the content, format or delivery of the proposed intervention. The knowledge about the problem and the possibilities for an intervention may be based on: personal experiences of the problem (patients, carers or members of the public); their work (practitioners, policymakers, researchers); published research or theory or discussions with stakeholders. These early ideas about the intervention may be refined and indeed challenged throughout the intervention development process. For example, understanding the problem, priorities for addressing it and the aspects that are amenable to change is part of the development process, and different solutions may emerge as understanding increases. In addition, developers may find that it is not necessary to develop a new intervention because effective or cost-effective ones already exist. It may not be worth developing a new intervention because the potential cost is likely to outweigh the potential benefits or its limited reach could increase health inequalities, or the current context may not be conducive to using it. Health economists may contribute to this debate.

Identify resources—time and funding

Once a decision has been made that a new intervention is necessary, and has the potential to be worthwhile, developers can consider the resources available to them. Spending too little time developing an intervention may result in a flawed intervention that is later found not to be effective or cost-effective or is not implemented in practice, resulting in research waste. Alternatively, spending too much time on development could also waste resources by leaving developers with an outdated intervention that is no longer acceptable or feasible to deliver because the context has changed so much or is no longer a priority. It is likely that a highly complex problem with a history of failed interventions will warrant more time for careful development.

Some funding bodies fund standalone intervention development studies or fund this endeavour as part of a programme of development, piloting and evaluation of an intervention. While pursuing such funding may be desirable to ensure sufficient resource, in practice some developers may not be able to access this funding and may have to fund different parts of the development process from separate pots of money over a number of years.

Applying for funding requires writing a protocol for a study. Funders need detail about the proposed intervention and the development process to make a funding decision. It may feel difficult to specify the intervention and the detail of its development before starting because these will depend on learning occurring throughout the development process. Developers can address this by describing in detail their best guess of the intervention and their planned development process, recognising that both are likely to change in practice. Even if funding is not sought, it may be a good idea to produce a protocol detailing the processes to be undertaken to develop the intervention so that sufficient resources can be identified.

Decide which approach to intervention development to take

A key decision for teams is whether to be guided by one of the many published approaches to intervention development or undertake a more pragmatic self-selected set of actions. A published approach is a guide to the process and methods of intervention development set out in a book, website or journal article. The rationale for using a published approach is that it sets out systematic processes that other developers have found useful. Some published approaches, and approaches that developers have used in practice, are listed in table 2. 6 No research has shown that one of these approaches is better than another or that their use always leads to the development of successful interventions. In practice, developers may select a specific published approach because of the purpose of their intervention development, for example, aiming to change behaviour might lead to the use of the Behaviour Change Wheel or Intervention Mapping, in conjunction with the Person Based Approach. Alternatively, selection may depend on developers’ beliefs or values, for example, partnership approaches such as coproduction may be selected because developers believe that users will find the resultant interventions more acceptable and feasible, or they may value inclusive work practices in their own right. Although developers may follow a published approach closely, experts recommend that developers apply these approaches flexibly to fit their specific context. Many of these approaches share the same actions 4 6 and simply place more emphasis on one or a subset of actions. Researchers sometimes combine the use of different approaches in practice to gain the strengths of two approaches, as in the ‘Combination’ category of table 2.

Table 2. Different approaches to intervention development

Involve stakeholders throughout the development process

Many groups of people are likely to have a stake in the proposed intervention: the intervention may be aimed at patients or the public, or they may be expected to use the intervention; practitioners may deliver the intervention in a range of settings, for example, hospitals, primary care, community care, social care, schools, communities, and voluntary/third sector organisations; and users, policy makers or tax payers may pay for the intervention. The rationale for involving relevant stakeholders from the start, and indeed working closely with them throughout, is that they can help to identify priorities, understand the problem and help find solutions that may make a difference to future implementation in the real world.

There are many ways of working with stakeholders and different ways may be relevant for different stakeholders at different times during the development process. Consultation may sometimes be appropriate, where a one-off meeting with a set of stakeholders helps developers to understand the context of the problem or the context in which the intervention would operate. Alternatively, the intervention may be designed closely with stakeholders using a coproduction process, where stakeholders and developers generate ideas about potential interventions and make decisions together throughout the development process about its content, format, style and delivery. 12 This could involve a series of workshops and meetings to build relationships over time to facilitate understanding of the problem and generation of ideas for the new intervention. Coproduction rather than consultation is likely to be important when buy-in is needed from a set of stakeholders to facilitate the feasibility, acceptability and engagement with the intervention or the health problem or context is particularly complex. Coproduction involves stakeholders in this decision-making, whereas with consultation, decisions are made by the research team. Stakeholders’ views may also be obtained through qualitative interviews, surveys and stakeholder workshops, with methods tailored to the needs of each stakeholder. Innovative activities can be used to help engage stakeholders, for example: creative sessions facilitated by a design specialist might involve imagining what versions of the new intervention might look like if designed by various well-known global manufacturers or creating a patient persona to help people think through the experiences of receiving an intervention. As well as participating in developing the intervention, stakeholders can help to shape the intervention development process itself. Members of the public, patients and service users are key stakeholders, and experts recommend planning to integrate their involvement into the intervention development process from the start.

Bring together a team and establish decision-making processes

Developers may choose to work within any size of team. Small teams can reach out to stakeholders at different points in the development process. Alternatively, large teams may include all the necessary expertise. Experts recommend including: experts in the problem to be addressed by the intervention; individuals with a strong track record in developing complex interventions; a behaviour change scientist when the intervention aims to change behaviour and people who are skilled at maximising engagement of stakeholders. Other possible team members include experts in evaluation methods and economics. Within a coproduction approach to development, key stakeholders participate as equal partners with researchers. Large teams can generate ideas and ensure all the relevant skills are available but may also increase the risk of conflicting views and difficulties when making decisions about the final intervention. There is no consensus on the size of team to have, but experts think it is important to agree a process for making decisions. In particular, experts recommend that team members understand their roles, rights and responsibilities; document the reasons for decisions made and are prepared to test different options where there are team disagreements.

Review published research evidence

Reviewing published research evidence before starting to develop an intervention can help to define the health problem and its determinants; understand the context in which the problem exists; clarify who the intervention should be aimed at; identify whether effective or cost-effective interventions already exist for the target population, setting and problem; identify facilitators of and barriers to delivering interventions in this context; and identify key uncertainties that need to be addressed using primary data collection. Continuing to review evidence throughout the process can help to address uncertainties that arise; for example, if a new substantive intervention component is proposed, then the research evidence about it can be explored. Evidence can change quickly, and keeping up with it by reviewing literature can alert developers to new relevant interventions that have been found to be effective or cost-effective. Developers may be tempted to look for evidence that supports existing ideas and plans, but should also look for, and take into account, evidence that the proposed intervention may not work in the way intended. Undertaking a new systematic review is not always necessary, because recent relevant reviews may already be available, nor is it always possible given the tight resources available to the development team. However, undertaking some review is important for ensuring that there are no existing interventions that would make the one under development redundant.

Draw on existing theories

Some developers call their approaches to intervention development ‘theory based’ when they draw on psychological, sociological, organisational or implementation theories, or frameworks of theories, to inform their intervention (6). The rationale for drawing on existing theories is that they can help to identify what is important, relevant and feasible to inform the intended goals of the intervention (13) and inform the content and delivery of any intervention. It may be relevant to draw on more than one existing theory. Experts recommend considering which theories are relevant at the start of the development process. However, the use of theories may need to be kept under scrutiny, since in practice some developers have found that their selected theory proved difficult to apply during the development process.

Articulate programme theory

A programme theory describes how a specific intervention is expected to lead to its effects and under what conditions (14). It shows the causal pathways between the content of the intervention, intermediate outcomes and long-term goals, and how these interact with contextual factors. Articulating programme theory at the start of the development process can help to communicate to funding agencies and stakeholders how the intervention will work. Existing theories may inform this programme theory. Logic models can be drawn to communicate different parts of the programme theory, such as the causes of a problem or the mechanisms by which an intervention will achieve outcomes, to both team members and external stakeholders. (Figure 1 of the original guidance gives an example of a logic model.) The programme theory and logic models are not static: they should be tested and refined throughout the development process using primary and secondary data collection and stakeholder input. Indeed, they are advocated for use in process evaluations alongside outcome evaluations in the recent MRC guidance on process evaluation (15).
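A logic model is essentially structured data: inputs, activities, hypothesised mechanisms, intermediate and long-term outcomes, and contextual factors. As a purely illustrative aid (not part of the guidance, and not a reproduction of its Figure 1), the following Python sketch shows how a development team might record these elements; every entry is an invented example.

```python
# Minimal sketch of a logic model as structured data.
# All component names and entries are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    problem: str
    inputs: list[str] = field(default_factory=list)          # resources available
    activities: list[str] = field(default_factory=list)      # intervention content
    mechanisms: list[str] = field(default_factory=list)      # hypothesised causal pathways
    intermediate_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)
    contextual_factors: list[str] = field(default_factory=list)

model = LogicModel(
    problem="Low physical activity among older adults",
    inputs=["trained facilitators", "community venue"],
    activities=["weekly group walking sessions", "goal-setting worksheets"],
    mechanisms=["increased self-efficacy", "social support"],
    intermediate_outcomes=["minutes of weekly activity"],
    long_term_outcomes=["reduced cardiovascular risk"],
    contextual_factors=["venue accessibility", "seasonal weather"],
)
print(model.mechanisms)
```

Writing the model out this explicitly makes it easier to test and refine specific pathways as development proceeds.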

Undertake primary data collection

Primary data collection, usually involving mixed methods, can be used for a range of purposes throughout the intervention development process. Reviewing the evidence base may identify key uncertainties that primary data collection can then address. Non-participant observation can be used to understand the setting in which the intervention will be used. Qualitative interviews with the target population or patient group can identify what matters most to people, their lived experience, or why people behave as they do. ‘Verbal protocol’ analysis, which involves users of an intervention talking aloud about it as they use it (16), can be undertaken to understand the usability of early versions of the intervention. Pre-test and post-test measures may be taken of intermediate outcomes to begin early testing of some aspects of the programme theory, an activity that will continue into the feasibility and evaluation phases of the MRC framework and may lead to changes to the programme theory. Surveys, discrete choice experiments or qualitative interviews can be used to assess the acceptability, values and priorities of those delivering and receiving the intervention.
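To illustrate the kind of early pre-test/post-test check described above, the sketch below runs a paired t-test on invented scores for a single intermediate outcome. This is only one simple option, and it assumes roughly normal paired differences; at this stage such analyses are exploratory signals for refining the programme theory, not confirmatory tests.

```python
# Sketch of an early pre/post check on one intermediate outcome.
# Scores are fabricated for illustration; a paired t-test assumes
# approximately normal paired differences and is only one option.
from scipy import stats

pre  = [12, 15, 11, 14, 13, 16, 10, 15]   # hypothetical baseline scores
post = [14, 18, 13, 15, 16, 17, 12, 18]   # same participants after the prototype

t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```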

Understand the context

Recent guidance on context in population health intervention research identifies a breadth of features, including those relating to populations and individuals; physical location or geographical setting; social, economic, cultural and political influences; and factors affecting implementation, for example, organisation, funding and policy (17). An important context is the specific setting in which the intervention will be used, for example, within a busy emergency department or within people’s homes. The rationale for understanding this context, and developing interventions that can operate within it, is to avoid developing interventions that fail during later evaluation because too few people deliver or use them. Context also includes the wider complex health and social care, societal or political systems within which any intervention will operate (18). Different approaches can be taken to understand context, including reviews of evidence, stakeholder engagement and primary data collection. A challenge of understanding context is that it may change rapidly over the course of the development process.

Pay attention to future implementation of the intervention in the real world

The end goal of developers, or of those who fund development, is real-world implementation rather than simply the development of an intervention that is shown to be effective or cost-effective in a future evaluation (7). Many interventions do not lead to change in policy or practice, and it is important that effective interventions inform policy and are eventually used in the real world to improve health and care. To achieve this goal, developers may pay attention early in the development process to factors that might affect use of the intervention, ‘scale up’ of the intervention for use nationally or internationally, and sustainability. For example, considering the cost of the intervention at an early stage, including as stakeholders the official bodies or policymakers that would endorse or accredit the intervention, or addressing the challenges of training practitioners to deliver the intervention, may all help its future implementation. Implementation-based approaches to intervention development are listed in table 2. Some other approaches listed in that table, such as Normalisation Process Theory, also emphasise implementation in the real world.

Design and refine the intervention

The term ‘design’ is sometimes used interchangeably with the term ‘development’. However, it is useful to see design as a specific creative part of the development process where ideas are generated, and decisions are made about the intervention components and how it will be delivered, by whom and where. Design starts with generation of ideas about the content, format, style and delivery of the proposed intervention. The process of design may use creative ways of generating ideas, for example, using games or physically making rough prototypes. Some teams include experts in design or use designers external to the team when undertaking this action. The rationale for a wide-ranging and creative design process is to identify innovative and workable ideas that may not otherwise have been considered.

After generating ideas, a mock-up or prototype of the intervention or a key component may be created to allow stakeholders to offer views on it. Once an early version or prototype of the intervention is available, it can be refined (sometimes called optimised) using a series of rapid iterations, where each iteration includes an assessment of how acceptable, feasible and engaging the intervention is, leading to cycles of refinement. The programme theory and logic models are important at this point, and developers may test whether some of the proposed mechanisms of action are affecting intermediate outcomes, if statistical power allows. The rationale for spending time on multiple iterations is that problems can be identified and solutions found prior to any expensive future feasibility or evaluation phase. Some experts take a quantitative approach to optimisation of an intervention, specifically the Multiphase Optimization Strategy in table 2, but not all experts agree that this is necessary.

End the development phase

Seeing this endeavour as a discrete ‘intervention development phase’ that comes to an end may feel artificial. In practice, there is overlap between some actions taken in the development phase and the feasibility phase of the MRC framework (1), such as consideration of acceptability and some measurement of change in intermediate outcomes. Developers may return to the intervention development phase if findings from the feasibility phase identify significant problems with the intervention. In many ways, development never stops, because developers will continue to learn about the intervention, and refine it, during the later pilot/feasibility, evaluation and implementation phases. The intention may be that some types of intervention continuously evolve during evaluation and implementation, which may reduce the amount of time spent on the development phase. However, developers need to decide when to stop the first intensive development phase, either by abandoning the intervention because pursuing it is likely to be futile, or by moving on to the next phase of feasibility/pilot testing or full evaluation. They also face the challenge of convincing potential funders of an evaluation that enough development has occurred to risk spending resources on a pilot or evaluation. The decision to end the development phase may be informed partly by practicalities, such as the amount of time and money available, and partly by the concept of data saturation used in qualitative research: the intensive process stops when few refinements are suggested by those delivering or using the intervention during its period of refinement, or when these and other stakeholders indicate that the intervention feels appropriate to them.
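The iterative refinement and saturation-style stopping rule described above can be pictured as a simple loop: assess the prototype, apply suggested refinements, and stop once stakeholders suggest few or none. The sketch below is an illustrative skeleton only; gather_feedback and refine are hypothetical stand-ins for real stakeholder work such as interviews, observation and usability testing.

```python
# Illustrative skeleton of iterative refinement with a saturation-style
# stopping rule. gather_feedback() stands in for real stakeholder assessment
# of acceptability, feasibility and engagement.
def gather_feedback(prototype, iteration):
    """Placeholder: returns suggested refinements; simulated here as
    shrinking over iterations to mimic saturation."""
    simulated = [["shorten sessions", "simplify wording"],
                 ["add reminder messages"],
                 []]
    return simulated[min(iteration, len(simulated) - 1)]

def refine(prototype, refinements):
    """Placeholder: apply the agreed refinements to the prototype."""
    return prototype + refinements

prototype, iteration = [], 0
while True:
    refinements = gather_feedback(prototype, iteration)
    if not refinements:          # few/no new suggestions: stop refining
        break
    prototype = refine(prototype, refinements)
    iteration += 1

print(f"stopped after {iteration} iterations; prototype features: {prototype}")
```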

At the end of the development process, policymakers, developers or service providers external to the original team may want to implement or evaluate the intervention. Describing the intervention using a relevant reporting guideline, such as the Template for Intervention Description and Replication (TIDieR) checklist (19), and producing a manual or document that describes the training as well as the content of the intervention, can facilitate this. This information can be made available on a website and, for some digital interventions, the intervention itself can be made available. It is helpful to publish the intervention development process, because doing so allows others to make links in the future between intervention development processes and the subsequent success of interventions, and to learn from intervention development endeavours. Publishing failed attempts to develop an intervention, as well as those that produce an intervention, may help to reduce research waste. Reporting multiple, iterative and interacting processes in these articles is challenging, particularly given the limited word counts of some journals. It may be necessary to publish more than one paper to describe the development if multiple lessons have been learnt for future development studies.

Conclusions

This guidance on intervention development presents a set of principles and actions for future developers to consider throughout the development process. There is insufficient research evidence to recommend that a particular published approach or set of actions is essential to produce a successful intervention. Some aspects of the guidance may not be relevant to some interventions or contexts, and not all developers are fortunate enough to have a large amount of resource available to them, so a flexible approach to using the guidance is required. The best way to use the guidance is to consider each action by addressing its relevance to a specific intervention in a specific context, both at the start and throughout the development process.


Acknowledgments

This guidance is based on secondary and primary research. Many thanks to participants in the e-Delphi exercises, consensus conference and qualitative interviews; to members of our Expert Panel; and to people who attended workshops discussing this guidance. The researchers leading the update of the MRC guidance on developing and evaluating interventions, due to be published later this year, also offered insightful comments on our guidance to facilitate fit between the two sets of guidance.


Contributors AOC and PH led the development of the guidance, wrote the first draft of the article and the full guidance document which it describes, and integrated contributions from the author group into subsequent drafts. All authors contributed to the design and content of the guidance and subsequent drafts of the paper (AOC, PH, LY, LC, NR, KMT, ED, KS). The guidance is based on reviews and primary research. AOC led the review of different approaches to intervention development working with KS. LC led the review of primary research working with KS. PH led the qualitative interview study working with NR, KMT and ED. ED led the consensus exercise working with NR. AOC acts as guarantor.

Funding MRC-NIHR Methodology Research Panel (MR/N015339/1). Funders had no influence on the guidance presented here. The authors were fully independent of the funders.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.


The Intervention Research Framework: Background and Overview

Nyanda McBride

This chapter provides an introduction to the Intervention Research Framework. The background and development of the Intervention Research Framework are discussed, along with an overview of its four phases. The first phase, the Notification phase, is discussed in greater detail using the SHAHRP study as an example. This section details the purpose of the Notification phase, the sources of information that can contribute to it, and how researchers can identify a gap in their research field that is worthy of further study. The final section of this chapter discusses the value of developing relationships between researchers, policy makers and practitioners for the purposes of intervention research, and how these relationships might be initiated and maintained during each phase of the Intervention Research Framework. Objectives: By the end of this chapter readers will be able to:

Describe the background and development of the Intervention Research Framework

Identify the various phases of the Intervention Research Framework

Recognise descriptive and aetiological sources that can inform the Notification phase of the Intervention Research Framework to assist in identifying a gap in research, policy and/or practice

Describe other notification sources that contribute to building a strength of argument for conducting specific research

Identify how the SHAHRP study incorporated the Notification phase of the Intervention Research Framework in its intervention development and design

Describe the value of researcher and policy/practice professional interactions throughout all phases of the Intervention Research Framework.



Author information

Nyanda McBride, Faculty of Health Sciences, National Drug Research Institute, Curtin University, Perth, WA, Australia.

Correspondence to Nyanda McBride.


Copyright information

© 2016 Springer Science+Business Media Singapore

About this chapter

McBride, N. (2016). The Intervention Research Framework: Background and Overview. In: Intervention Research. Springer, Singapore. https://doi.org/10.1007/978-981-10-1011-8_2

Published: 02 August 2016. Print ISBN: 978-981-10-1009-5. Online ISBN: 978-981-10-1011-8.



Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda

Byron J. Powell, Maria E. Fernandez, Nathaniel J. Williams, Gregory A. Aarons, Rinad S. Beidas, Cara C. Lewis, Sheena M. McHugh, Bryan J. Weiner


Edited by: Mary Evelyn Northridge, New York University, United States

Reviewed by: Deborah Paone, Independent Researcher, Minneapolis, MN, United States; Christopher Mierow Maylahn, New York State Department of Health, United States

*Correspondence: Byron J. Powell [email protected]

This article was submitted to Public Health Education and Promotion, a section of the journal Frontiers in Public Health

Received 2018 Oct 16; Accepted 2019 Jan 4; Collection date 2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

The field of implementation science was developed to better understand the factors that facilitate or impede implementation and generate evidence for implementation strategies. In this article, we briefly review progress in implementation science, and suggest five priorities for enhancing the impact of implementation strategies. Specifically, we suggest the need to: (1) enhance methods for designing and tailoring implementation strategies; (2) specify and test mechanisms of change; (3) conduct more effectiveness research on discrete, multi-faceted, and tailored implementation strategies; (4) increase economic evaluations of implementation strategies; and (5) improve the tracking and reporting of implementation strategies. We believe that pursuing these priorities will advance implementation science by helping us to understand when, where, why, and how implementation strategies improve implementation effectiveness and subsequent health outcomes.

Keywords: implementation strategies, implementation science, designing and tailoring, mechanisms, effectiveness research, economic evaluation, reporting guidelines

Introduction

Nearly 20 years ago, Grol and Grimshaw ( 1 ) asserted that evidence-based practice must be complemented by evidence-based implementation. The past two decades have been marked by significant progress, as the field of implementation science has worked to develop a better understanding of implementation barriers and facilitators (i.e., determinants) and generate evidence for implementation strategies ( 2 ). In this article, we briefly review progress in implementation science and suggest five priorities for enhancing the impact of implementation strategies. We draw primarily upon the healthcare, behavioral health, and social services literature. While we hope the proposed priorities are applicable to studies conducted in a wide range of contexts, we welcome discussion regarding potential applications and enhancements for contexts outside of healthcare, such as community and public health settings ( 3 ) that often involve different types of stakeholders, interventions, and implementation strategies.

Implementation strategies are methods or techniques used to improve adoption, implementation, sustainment, and scale-up of interventions ( 4 , 5 ). These strategies vary in complexity, from discrete or single component strategies ( 6 , 7 ) such as computerized reminders ( 8 ) or audit and feedback ( 9 ) to multifaceted implementation strategies that combine two or more discrete strategies, some of which have been branded and tested using rigorous designs [e.g., ( 10 , 11 )]. Implementation strategies can target a range of stakeholders ( 12 ) and multilevel contextual factors across different phases of implementation ( 13 – 16 ). For example, strategies can address patient ( 17 ), provider ( 18 ), organizational ( 19 ), community ( 20 , 21 ), policy and financing ( 22 ), or multilevel ( 23 ) factors.

Several taxonomies describe and organize the types of strategies available ( 6 , 7 , 24 – 26 ). Similarly, taxonomies of behavior change techniques ( 27 ) and methods ( 28 ) describe components of strategies at a more granular level. Both types of taxonomies promote a common language, inform implementation strategy development and evaluation by facilitating consideration of various “building blocks” or components of multifaceted and multilevel strategies, and improve the quality of reporting in research and practice.

The evidence base for implementation strategies is steadily developing. Initially, single-component, narrowly focused strategies that were effective in earlier studies were selected in subsequent studies despite differences between the clinical problems and contexts in which they were deployed ( 29 ). That approach was based on the assumption that strategies would be effective independent of the implementation problems being addressed ( 29 ). This “magic bullet” approach has led to limited success ( 30 ), prompting recognition that strategies should be selected or developed based upon a thorough understanding of context, including the causes of quality and implementation gaps, an assessment of implementation determinants, and an understanding of the mechanisms and processes needed to address them ( 29 ).

Evidence syntheses for discrete, multifaceted, and tailored implementation strategies have been conducted. The Cochrane Collaboration's Effective Practice and Organization of Care (EPOC) group has been a leader in this regard, with 132 systematic reviews of strategies such as educational meetings ( 31 ), audit and feedback ( 9 ), printed educational materials ( 32 ), and local opinion leaders ( 33 ). Grimshaw et al. ( 34 ) note that while median absolute effect sizes across implementation strategies are similar (see Table 1), the variation in observed effects within each strategy category suggests that effects may vary based upon whether or not they address determinants (barriers and facilitators). Indeed, determinants at multiple levels and phases may signal the need for multifaceted and tailored strategies that address key determinants ( 13 ).

Table 1. Evidence for common implementation strategies targeting professional behavior change. [Table not reproduced here; updated from Grimshaw et al. ( 34 ), drawing upon Cochrane Reviews from the Effective Practice and Organization of Care (EPOC) group ( 38 ).]

While the use of multifaceted and tailored implementation strategies is intuitive and has considerable face validity ( 29 ), the evidence regarding their superiority to single-component strategies has been mixed ( 37 , 39 , 40 ). A review of 25 systematic reviews ( 39 ) found “no compelling evidence that multifaceted interventions are more effective than single-component interventions” (p. 20). Grimshaw et al. ( 34 ) provide one possible explanation, emphasizing that the general lack of an a priori rationale for the selection of components (i.e., discrete strategies) in multifaceted implementation strategies makes it difficult to determine how these decisions were made. They may have been selected thoughtfully to address prospectively identified determinants through theoretically- or empirically-derived change mechanisms, or they may simply be the manifestation of a “kitchen sink” approach. Wensing et al. ( 41 ) offer a complementary perspective, noting that definitions of discrete and multifaceted strategies are problematic. A discrete strategy such as outreach visits may include instruction, motivation, planning of improvement, and technical assistance; thus, it may not be accurate to characterize it as a single-component strategy. Conversely, a multifaceted strategy including educational workshops, educational materials, and webinars may only address provider knowledge and fail to address other important implementation barriers. They propose that multifaceted strategies that truly target multiple relevant implementation determinants could be more effective than single-component strategies ( 41 ).

A systematic review of 32 studies testing strategies tailored to address determinants concluded that tailored approaches to implementation were more effective than no strategy or a strategy not tailored to determinants; however, the methods used to identify and prioritize determinants and select implementation strategies were not often well-described and no specific method has been proven superior ( 37 ). The lack of systematic methods to guide this process is problematic, as evidenced by a review of 20 studies that found that implementation strategies were often poorly conceived, with mismatches between strategies and determinants (e.g., barriers were identified at the team or organizational level, but strategies were not focused on structures and processes at those levels) ( 42 ). A multi-national program of research was undertaken to improve the methods of tailoring implementation strategies ( 43 ), but tailored strategies had little impact on primary and secondary outcomes ( 40 ). Questions remain about the best methods to develop tailored implementation strategies.

Five priorities need to be addressed to increase the public health impact of implementation strategies: (1) enhance methods for designing and tailoring; (2) specify and test mechanisms of change; (3) conduct more effectiveness research on discrete, multifaceted, and tailored strategies; (4) increase economic evaluations; and (5) improve tracking and reporting. Table 2 provides examples of studies that have pursued each priority with rigor.

Table 2. Five priorities for research on implementation strategies. [Table not reproduced here.]

Enhance Methods for Designing and Tailoring Implementation Strategies

Implementation strategies are too often designed in an unsystematic manner and fail to address key contextual determinants ( 13 – 16 ). Stakeholders may rely upon inertia (i.e., "we've always done things this way"), one-size-fits-all approaches, or utilize what Martin Eccles has called the ISLAGIATT principle (i.e., "it seemed like a good idea at the time") ( 53 ). Consequently, strategies are not always well-matched to the contexts in which they are deployed, including the interventions to be implemented, settings, stakeholder preferences, and implementation determinants ( 37 , 42 , 54 ). More rational, systematic approaches to identify and prioritize barriers and to link strategies to overcome them are needed ( 37 , 42 , 55 – 57 ). A number of methods have been suggested. Colquhoun and colleagues ( 56 ) found 15 articles with replicable methods for designing strategies to change healthcare professionals' behavior, and Powell et al. ( 55 ) proposed Intervention Mapping ( 58 ), concept mapping ( 59 ), conjoint analysis ( 60 ), and system dynamics modeling ( 61 ) as methods to aid the design, selection, and tailoring of strategies. These methods share common steps (identification of barriers, linking barriers to strategy component selection, use of theory, and user engagement), and have potential to make the process of designing and tailoring implementation strategies more rigorous ( 55 , 56 ). For example, Intervention Mapping is a step-by-step approach to developing implementation strategies that uses a detailed and participatory needs assessment to identify implementers, implementation behaviors, and their determinants, and ultimately to select behavior change methods and implementation strategies that influence those determinants. Some work has been done to compare different methods for assessing determinants ( 62 ); however, several questions remain. How can determinants be accurately and efficiently assessed (ideally leveraging implementation frameworks)? Can perceived and actual determinants be differentiated? What are the best methods for prioritizing determinants that need to be proactively addressed? When should determinant assessment take place, given that new challenges are likely to emerge during the course of implementation? Who should be involved in this process? Each of these questions has resource implications. Similarly, questions remain about efficiently linking prioritized determinants to effective and pragmatic implementation strategies. How can causal theory be leveraged or developed to guide the selection of implementation strategies? Can pragmatic tools be developed to systematically link strategies to determinants? Approaches to designing and tailoring implementation strategies should be tested to determine whether they improve implementation and clinical outcomes ( 55 , 56 ). Given that clinical problems, clinical and public health interventions, settings, individuals, and contextual factors are highly heterogeneous, there is much to gain from developing generalizable processes for designing and tailoring strategies.
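To make the "linking determinants to strategies" step concrete, one simple representation is a lookup structure that maps prioritized determinants to candidate strategies. The pairings below are invented illustrations, not validated linkages; in practice the links would come from theory, evidence, and stakeholder input.

```python
# Illustrative determinant-to-strategy lookup. The pairings are
# hypothetical examples, not validated linkages from the literature.
determinant_to_strategies = {
    "low provider knowledge": ["educational meetings", "printed materials"],
    "weak organizational climate": ["leadership coaching", "facilitation"],
    "competing workflow demands": ["workflow redesign", "reminders"],
    "low patient demand": ["patient-mediated interventions"],
}

# Determinants prioritized in a (hypothetical) needs assessment:
prioritized = ["low provider knowledge", "competing workflow demands"]

plan = {d: determinant_to_strategies.get(d, ["<no candidate strategy>"])
        for d in prioritized}
for determinant, strategies in plan.items():
    print(f"{determinant}: {', '.join(strategies)}")
```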

Specify and Test Mechanisms of Change

Studies of implementation strategies should increasingly focus on establishing the processes and mechanisms by which strategies exert their effects rather than simply establishing whether or not they were effective ( 29 , 63 , 64 ). The National Institutes of Health ( 64 ) provides this guidance:

Wherever possible, studies of dissemination or implementation strategies should build knowledge both on the overall effectiveness of the strategies, as well as “how and why” they work. Data on mechanisms of action, moderators, and mediators of dissemination and implementation strategies will greatly aid decision-making on which strategies work for which interventions, in which settings, and for which populations.

Unfortunately, it is not common that mechanisms are even mentioned, much less tested ( 63 , 65 , 66 ). Williams ( 63 ) emphasizes the need for trials that test a wider range of multilevel mediators of implementation strategies, stronger theoretical links between strategies and hypothesized mediators, improved design and analysis of multilevel mediation models in randomized trials, and an increasing focus on identifying implementation strategies and behavior change techniques that contribute most to improvement. Developing a more nuanced understanding of mechanisms will require researchers to thoroughly assess the context of implementation and describe causal pathways by which strategies exert their effects, moving beyond a broad identification of determinants and articulating mediators, moderators, preconditions, and proximal and distal outcomes ( 67 ). Examples of this type of approach and guidance for their development can be found in Lewis et al. ( 67 ), Weiner et al. ( 23 ), Bartholomew et al. ( 58 ), and Highfield et al. ( 44 ). Additionally, drawing more heavily upon theory ( 66 , 68 , 69 ), using research designs that maximize ability to make causal inferences ( 70 , 71 ), leveraging methods that capture and reflect the complexity of implementation such as systems science ( 61 , 72 , 73 ) and mixed methods ( 74 – 76 ) approaches, and adhering to methods standards for studies of complex interventions ( 77 ) will help to sharpen our understanding of how implementation strategies engage hypothesized mechanisms. Work to link implementation strategies and behavior change techniques to hypothesized mechanisms is underway ( 67 , 78 ), which promises to improve our understanding of how, when, where, and why implementation strategies are effective.

Conduct More Effectiveness Research on Discrete, Multi-faceted, and Tailored Implementation Strategies

There is a need for more and better effectiveness research on discrete, multifaceted, and tailored implementation strategies using a wider range of innovative designs ( 70 , 79 – 82 ). First, while a number of discrete implementation strategies have been described ( 6 , 7 , 24 , 25 ) and tested ( 38 ), there are gaps in our understanding about how to optimize these strategies. There are over 140 randomized trials of audit and feedback, but Ivers et al. ( 83 ) conclude that there is much to learn about when it will work best and why, and how to design reliable and effective audit and feedback strategies across different settings and providers. Audit and feedback is an example of how complex implementation strategies can be. The ICeBERG group ( 69 ) pointed to the fact that even varying five modifiable elements of audit and feedback (content, intensity, method of delivery, duration, and context) produces 288 potential combinations. These variations matter ( 84 ), and there is a need for tests of audit and feedback and other discrete implementation strategies that include clearly described components that are theoretically and empirically derived, and well-operationalized. The results of these studies could inform the use of discrete strategies and their inclusion in multifaceted strategies.
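The combinatorial point about audit and feedback can be made concrete with a few lines of code. The per-element option counts below are hypothetical choices that happen to multiply to 288; the cited paper is not being quoted, only the arithmetic illustrated.

```python
# Counting audit-and-feedback variants. The per-element option counts are
# hypothetical; they are chosen only to show how five modifiable elements
# can yield 288 combinations (4 * 3 * 4 * 3 * 2 = 288).
from itertools import product

options = {
    "content": 4,
    "intensity": 3,
    "method_of_delivery": 4,
    "duration": 3,
    "context": 2,
}

total = 1
for count in options.values():
    total *= count
print(total)  # 288

# Enumerating the design space explicitly gives the same count:
variants = list(product(*(range(n) for n in options.values())))
assert len(variants) == 288
```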

Second, there is a need for trials that give insight into the sequencing of multifaceted strategies and what to do if the first strategy fails ( 39 ). These strategies could be compared to discrete/single-component implementation strategies or multifaceted strategies of varying complexity and intensity with well-defined components that are theoretically aligned with implementation determinants. These strategies could be tested using MOST, SMART, or other variants of factorial designs that can evaluate the relative impact of various components of multifaceted strategies and inform their sequencing ( 70 , 85 ).

Finally, tests of strategies that are prospectively tailored to different implementation contexts to address specific implementers, implementation behaviors, or determinants are needed ( 37 ). This work could involve comparisons between tailored and non-tailored multifaceted implementation strategies ( 86 ), as well as tests of established and innovative methods that could inform the identification, selection, and tailoring of implementation strategies ( 55 , 56 ).

Increase Economic Evaluations of Implementation Strategies

Few studies include economic evaluations of implementation strategies ( 87 , 88 ). For example, in a systematic review of 235 implementation studies, only 10% provided information about implementation costs ( 87 ). The dearth of economic evaluations severely limits our ability to understand which strategies might be feasible for different contexts, as some decision makers might underestimate the resources required to implement and sustain evidence-based practices (EBPs), while others might over-estimate them and preemptively limit themselves from implementing EBPs that could benefit their communities ( 89 ). Incorporating economic analyses into studies of implementation strategies would provide decision makers with more complete information to guide strategy selection, and would encourage researchers to be more judicious and pragmatic in their design and selection of implementation strategies, narrowing attention to strategies and mechanisms hypothesized to be most essential. If methods for designing and tailoring strategies can be improved such that complex multifaceted strategies are proven superior to single-component or less complex multifaceted strategies ( 39 ) and tailored strategies are proven superior to more standard multifaceted strategies ( 37 , 40 , 43 , 55 ), economic evaluations will be instrumental in demonstrating whether improvements in implementation are worth the added costs. Practical tools for integrating economic evaluations within implementation studies have been developed, such as the Costs of Implementing New Strategies (COINS) method ( 89 ), which was developed to address the need for standardized methods for analyzing cost data in implementation research that extend beyond the cost of the clinical intervention itself ( 90 ). For example, the original COINS study presented a head-to-head trial of two implementation approaches; although one approach was significantly more costly, the implementation outcomes achieved were superior enough to warrant the additional resources ( 91 ). Increasing the number and relevance of economic evaluations will require the development of a common framework that promotes comparability across studies ( 88 ).
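As a minimal illustration of how an economic evaluation can inform strategy selection, the sketch below computes an incremental cost-effectiveness ratio (ICER) for two hypothetical strategies; all figures are invented.

```python
# Incremental cost-effectiveness of two hypothetical implementation
# strategies. All numbers are invented for illustration.

# (cost per site, proportion of providers reaching fidelity)
single_component = {"cost": 5_000.0, "effect": 0.40}
multifaceted     = {"cost": 12_000.0, "effect": 0.65}

delta_cost = multifaceted["cost"] - single_component["cost"]
delta_effect = multifaceted["effect"] - single_component["effect"]

icer = delta_cost / delta_effect
print(f"ICER = ${icer:,.0f} per additional unit of provider fidelity")
# A decision maker can compare this ratio against what they are
# willing to pay for the improvement in implementation outcomes.
```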

Improve Tracking and Reporting of Implementation Strategies

Developing a robust evidence base for implementation strategies will require that their use be contemporaneously tracked and that they be reported in the literature with sufficient detail ( 92 ). It is often difficult to ascertain which implementation strategies were used and how they might be replicated. Part of the challenge is the iterative nature of implementation. Even if strategies are meticulously described in a study protocol or trial registry, it is often unrealistic to expect that they will not need to be altered as determinants emerge across implementation phases ( 13 , 93 , 94 ). These changes are likely to occur within and between implementing sites in research studies and applied efforts ( 50 , 51 ), and without rigorous methods for tracking implementation strategy use, efforts to understand what strategies were used and whether or not they were effective are stymied. Even when strategies are reported in study protocols or empirical articles, there are numerous problems with their description, including inconsistent labeling; lack of operational definitions; poor description and absence of manuals to guide their use; and lack of a clear theoretical, empirical, or pragmatic justification for how the strategies were developed and applied ( 4 ). Poor reporting clouds the interpretation of results, precludes replication in research and practice, and limits our ability to synthesize findings across studies ( 4 , 92 ). Findings from systematic reviews illustrate this problem. For example, a review of learning collaboratives by Nadeem et al. ( 95 ) concluded that "reporting on specific components of the collaborative was imprecise across articles, rendering it impossible to identify active quality improvement collaborative ingredients linked to improved care."

A number of reporting guidelines could be leveraged to improve descriptions of strategies ( 4 , 96 – 100 ). Proctor et al. ( 4 ) recommend that researchers name and define strategies in ways that are consistent with the published literature, and carefully operationalize each strategy by specifying: (1) actor(s), (2) action(s), (3) action target(s), (4) temporality, (5) dose, (6) implementation outcomes affected, and (7) theoretical, empirical, or pragmatic justification. Specifying strategies in this way has the potential to increase our understanding of not only which strategies are most effective, but more importantly, the processes and mechanisms by which they exert their effects ( 29 , 67 ). Additional options that provide structured reporting recommendations include the Workgroup for Intervention Development and Evaluation Research (WIDER) recommendations ( 99 , 100 ), the Simplified Framework ( 96 ) and its extension [AIMD; ( 97 )], and the Template for Intervention Description and Replication (TIDieR) checklist ( 98 ). Though not specific to the reporting of implementation strategies, the Standards for Reporting Implementation Studies ( 101 ) and the Neta et al. ( 102 ) reporting framework emphasize how critical it is to report on the multilevel context of implementation. The use of any of the existing guidelines would enhance the clarity of strategy description. We believe that developing approaches to tracking implementation strategies ( 50 , 51 ), and assessing the extent to which they are pragmatic (e.g., acceptable, compatible, easy, and useful) for both research and applied efforts, is a high priority. Further, efficient ways of linking empirical studies with study protocols, to gauge the degree to which strategies have been adapted or tailored over the course of an implementation effort, would be helpful. Failing to improve the quality of reporting will negate other advances in this area by hindering replication.
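Proctor et al.'s seven elements lend themselves to a structured record that a study team could fill in contemporaneously for each strategy. The sketch below shows one hypothetical way to do this in code; the field values are invented examples, not drawn from any study.

```python
# Structured record for specifying an implementation strategy following
# the seven elements recommended by Proctor et al. (4).
# All example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class StrategySpecification:
    name: str
    actor: str                 # who enacts the strategy
    action: str                # what they do
    action_target: str         # what the action aims to change
    temporality: str           # when / in which phase it is used
    dose: str                  # frequency and intensity
    outcomes_affected: str     # implementation outcome(s) targeted
    justification: str         # theoretical, empirical, or pragmatic rationale

audit_feedback = StrategySpecification(
    name="Audit and feedback",
    actor="Quality improvement coordinator",
    action="Distributes monthly fidelity reports with peer comparison",
    action_target="Clinician awareness of performance gaps",
    temporality="Monthly, throughout the implementation phase",
    dose="One report per clinician per month",
    outcomes_affected="Fidelity",
    justification="Feedback theory; EPOC review evidence",
)
print(audit_feedback.name, "->", audit_feedback.outcomes_affected)
```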

Implementation science has advanced considerably, yielding a more robust understanding of implementation strategies. Several resources can inform the use of implementation strategies, including established taxonomies of implementation strategies ( 6 , 7 , 24 , 25 ) and behavior change techniques ( 27 , 28 ), repositories of systematic reviews ( 38 , 103 , 104 ), methods for selecting and tailoring implementation strategies ( 40 , 55 , 56 ), and reporting guidelines that promote replicability ( 4 , 98 – 100 ). Nevertheless, questions remain and further effectiveness research and methodological development are needed to ensure that evidence is effectively translated into public health impact. Advancing these priorities will lead to a better understanding of when, where, why, and how implementation strategies exert their effects ( 29 , 63 ).

Author Contributions

BP conceptualized the paper and wrote the first draft of the manuscript. All other authors contributed to the writing and approved the final manuscript.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding. BP was supported by grants and contracts from the NIH, including K01MH113806, R25MH104660, UL1TR002489, R01MH106510, R01MH103310, P30A1050410, and R25MH080916. NW was supported by P50MH113840 from the NIMH. RB was supported by grants from the NIMH through R21MH109878 and P50MH113840. CL was supported by R01MH106510 and R01MH103310 from the NIMH. SM was supported by a Fulbright-Health Research Board Impact Award.

1. Grol R, Grimshaw JM. Evidence-based implementation of evidence-based medicine. Jt Comm J Qual Improv. (1999) 25:503–13. doi: 10.1016/S1070-3241(16)30464-3
2. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. (2006) 1:1–3. doi: 10.1186/1748-5908-1-1
3. Vinson CA, Stamatkis KA, Kerner JF. Dissemination and implementation research in community and public health settings. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. New York, NY: Oxford University Press (2018). pp. 355–70.
4. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. (2013) 8:1–11. doi: 10.1186/1748-5908-8-139
5. Powell BJ, Garcia K, Fernandez ME. Implementation strategies. In: Chambers D, Vinson C, Norton W, editors. Optimizing the Cancer Control Continuum: Advancing Implementation Research. New York, NY: Oxford University Press (2019). pp. 98–120.
6. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. (2012) 69:123–57. doi: 10.1177/1077558711430690
7. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. (2015) 10:1–14. doi: 10.1186/s13012-015-0209-1
8. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw JM. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. (2009) CD001096:1–68. doi: 10.1002/14651858.CD001096.pub2
9. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. (2012) CD000259:1–227. doi: 10.1002/14651858.CD000259
10. Glisson C, Schoenwald S, Hemmelgarn A, Green P, Dukes D, Armstrong KS, et al. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. J Consult Clin Psychol. (2010) 78:537–50. doi: 10.1037/a0019160
11. Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. (2015) 10:1–12. doi: 10.1186/s13012-014-0192-y
12. Chambers DA, Azrin ST. Partnership: a fundamental component of dissemination and implementation research. Psychiatr Serv. (2013) 64:509–11. doi: 10.1176/appi.ps.201300032
13. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health Ment Health Serv Res. (2011) 38:4–23. doi: 10.1007/s10488-010-0327-7
14. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. (2009) 4:1–15. doi: 10.1186/1748-5908-4-50
15. Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. (2013) 8:1–11. doi: 10.1186/1748-5908-8-35
16. Benjamin Wolk C, Powell BJ, Beidas RS. Contextual Influences and Strategies for Dissemination and Implementation in Mental Health. New York, NY: Oxford Handbooks Online (2015).
17. Gagliardi AR, Légaré F, Brouwers MC, Webster F, Badley E, Straus S. Patient-mediated knowledge translation (PKT) interventions for clinical encounters: a systematic review. Implement Sci. (2016) 11:1–13. doi: 10.1186/s13012-016-0389-3
18. Flanagan ME, Ramanujam R, Doebbeling BN. The effect of provider- and workflow-focused strategies for guideline implementation on provider acceptance. Implement Sci. (2009) 4:1–10. doi: 10.1186/1748-5908-4-71
19. Wensing M, Laurant M, Ouwens M, Wollersheim H. Organizational implementation strategies for change. In: Grol R, Wensing M, Eccles M, Davis D, editors. Improving Patient Care: The Implementation of Change in Health Care. Chichester, West Sussex: Wiley-Blackwell (2013). pp. 240–53.
20. Rabin BA, Glasgow RE, Kerner JF, Klump MP, Brownson RC. Dissemination and implementation research on community-based cancer prevention: a systematic review. Am J Prev Med. (2010) 38:443–56. doi: 10.1016/j.amepre.2009.12.035
21. Chinman M, Acosta J, Ebener P, Malone PS, Slaughter ME. Can implementation support help community-based settings better deliver evidence-based sexual health promotion programs? A randomized trial of Getting To Outcomes®. Implement Sci. (2016) 11:78. doi: 10.1186/s13012-016-0446-y
22. Wensing M, Eccles M, Grol R. Economic and policy strategies for implementation of change. In: Grol R, Wensing M, Eccles M, Davis D, editors. Improving Patient Care: The Implementation of Change in Health Care. Chichester, West Sussex: Wiley-Blackwell (2013). pp. 269–77.
23. Weiner BJ, Lewis MA, Clauser SB, Stitzenberg KB. In search of synergy: strategies for combining interventions at multiple levels. JNCI Monogr. (2012) 44:34–41. doi: 10.1093/jncimonographs/lgs001
24. Cochrane Effective Practice and Organisation of Care Group. Data Collection Checklist. (2002). Available online at: http://epoc.cochrane.org/sites/epoc.cochrane.org/files/uploads/datacollectionchecklist.pdf
25. Mazza D, Bairstow P, Buchan H, Chakraborty SP, Van Hecke O, Grech C, et al. Refining a taxonomy for guideline implementation: results of an exercise in abstract classification. Implement Sci. (2013) 8:1–10. doi: 10.1186/1748-5908-8-32
26. Effective Practice and Organisation of Care (EPOC). EPOC Taxonomy. (2015). Available online at: https://epoc.cochrane.org/epoc-taxonomy
27. Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. (2013) 46:81–95. doi: 10.1007/s12160-013-9486-6
28. Kok G, Gottlieb NH, Peters GY, Mullen PD, Parcel GS, Ruiter RAC, et al. A taxonomy of behaviour change methods: an intervention mapping approach. Health Psychol Rev. (2016) 10:297–312. doi: 10.1080/17437199.2015.1077155
29. Mittman BS. Implementation science in health care. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. New York, NY: Oxford University Press. pp. 400–18.
30. Oxman AD, Thomson MA, Davis DA, Haynes B. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. (1995) 153:1424–31.
31. Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf F, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. (2009) 2:CD003030. doi: 10.1002/14651858.CD003030.pub2
32. Farmer AP, Légaré F, Turcot L, Grimshaw JM, Harvey E, McGowan J, Wolf FM. Printed educational materials: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. (2011) 3:CD004398. doi: 10.1002/14651858.CD004398.pub2
33. Flodgren G, Parmelli E, Doumit G, Gattellari M, O'Brien MA, Grimshaw J, et al. Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. (2011) 8:CD000125. doi: 10.1002/14651858.CD000125.pub4
34. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. (2012) 7:1–17. doi: 10.1186/1748-5908-7-50
35. Giguère A, Légaré F, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, et al. Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. (2012) 10:CD004398. doi: 10.1002/14651858.CD004398.pub3
36. O'Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. (2007) 4:CD000409. doi: 10.1002/14651858.CD000409.pub2
37. Baker R, Comosso-Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. (2015) 4:1–118. doi: 10.1002/14651858.CD005470.pub3
38. Cochrane Collaboration. Cochrane Effective Practice Organisation of Care Group. (2013). Available online at: http://epoc.cochrane.org
39. Squires JE, Sullivan K, Eccles MP, Worswick J, Grimshaw JM. Are multifaceted interventions more effective than single component interventions in changing healthcare professionals' behaviours? An overview of systematic reviews. Implement Sci. (2014) 9:152. doi: 10.1186/s13012-014-0152-6
40. Wensing M. The Tailored Implementation in Chronic Diseases (TICD) project: introduction and main findings. Implement Sci. (2017) 12:1–4. doi: 10.1186/s13012-016-0536-x
41. Wensing M, Bosch M, Grol R. Selecting, tailoring, and implementing knowledge translation interventions. In: Straus S, Tetroe J, Graham ID, editors. Knowledge Translation in Health Care: Moving From Evidence to Practice. Oxford, UK: Wiley-Blackwell. pp. 94–113.
42. Bosch M, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. J Eval Clin Pract. (2007) 13:161–8. doi: 10.1111/j.1365-2753.2006.00660.x
43. Wensing M, Oxman A, Baker R, Godycki-Cwirko M, Flottorp S, Szecsenyi J, et al. Tailored Implementation for Chronic Diseases (TICD): a project protocol. Implement Sci. (2011) 6:1–8. doi: 10.1186/1748-5908-6-103
  • 44. Highfield L, Valeria MA, Fernandez ME, Bartholomew Eldridge K. Development of an implementation intervention using intervention mapping to increase mammography among low income women. Front Public Health (2018) 6:300. 10.3389/fpubh.2018.00300 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 45. Williams NJ, Glisson C, Hemmelgarn A, Green P. Mechanisms of change in the ARC organizational strategy: increasing mental health clinicians' EBP adoption through improved organizational culture and capacity. Adm Policy Ment Health (2017) 44:269–83. 10.1007/s10488-016-0742-5 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 46. Gude WT, Roos-Blom M, van der Veer SN, de Jonge E, Peek N, Dongelmans DA, et al. Electronic audit and feedback intervention with action implementation toolbox to improve pain management in intensive care: protocol for a laboratory experiment and cluster randomised trial. Implement Sci. (2017) 12:1–12. 10.1186/s13012-017-0594-8 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 47. Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. (2014) 9:1–14. 10.1186/s13012-014-0132-x [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 48. Tailored Implementation for Chronic Diseases (2017). Available online at: https://www.biomedcentral.com/collections/TICD
  • 49. Hoomans T, Severens JL. Economic evaluation of implementation strategies in health care. Implement Sci. (2014) 9:1–6. 10.1186/s13012-014-0168-y [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 50. Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. (2017) 15:1–12. 10.1186/s12961-017-0175-y [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 51. Boyd MR, Powell BJ, Endicott D, Lewis CC. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. (2018) 49:525–37. 10.1016/j.beth.2017.11.012 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 52. Bunger AC, Hanson RF, Doogan NJ, Powell BJ, Cao Y, Dunn J. Can learning collaboratives support implementation by rewiring professional networks? Adm Policy Ment Health Ment Health Serv Res. (2016) 43:79–92. 10.1007/s10488-014-0621-x [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 53. Michie S, Atkins L, Gainforth HL. Changing behaviour to improve clinical practice and policy. In: Dias P, Gonçalves A, Azevedo A, Lobo F. editors. Novos Desafios, Novas Competências: Contributos Atuais da Psicologia, Braga, Portugal: Axioma - Publicações da Faculdade de Filosofia. pp. 41–60. [ Google Scholar ]
  • 54. Powell BJ, Proctor EK. Learning from implementation as usual in children's mental health. Implement Sci. (2016) 11:26–27. 10.1186/1748-5908-8-92 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 55. Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. (2017) 44:177–94. 10.1007/s11414-015-9475-6 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 56. Colquhoun HL, Squires JE, Kolehmainen N, Grimshaw JM. Methods for designing interventions to change healthcare professionals' behaviour: a systematic review. Implement Sci. (2017) 12:1–11. 10.1186/s13012-017-0560-5 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 57. Grol R, Bosch M, Wensing M. Development and selection of strategies for improving patient care. In: Grol R, Wensing M, Eccles M, Davis D. editors. Improving Patient Care: The Implementation of Change in Health Care. Chichester: John Wiley & Sons, Inc. pp. 165–184. [ Google Scholar ]
  • 58. Bartholomew Eldridge LK, Markham CM, Ruiter RAC, Fernández ME, Kok G, Parcel GS. Planning Health Promotion Programs: An Intervention Mapping Approach. 4th edition San Francisco, CA: Jossey-Bass, Inc. (2016). [ Google Scholar ]
  • 59. Kane M, Trochim WMK. Concept Mapping for Planning and Evaluation. Thousand Oaks, CA: Sage; (2007). [ Google Scholar ]
  • 60. Farley K, Thompson C, Hanbury A, Chambers D. Exploring the feasibility of conjoint analysis as a tool for prioritizing innovations for implementation. Implement Sci. (2013) 8:1–9. 10.1186/1748-5908-8-56 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 61. Hovmand PS. Community Based System Dynamics. New York, NY: Springer; (2014). [ Google Scholar ]
  • 62. Krause J, Van Lieshout J, Klomp R, Huntink E, Aakhus E, Flottorp S, et al. Identifying determinants of care for tailoring implementation in chronic diseases: an evaluation of different methods. Implement Sci. (2014) 9:102. 10.1186/s13012-014-0102-3 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 63. Williams NJ. Multilevel mechanisms of implementation strategies in mental health: integrating theory, research, and practice. Adm Policy Ment Health Ment Health Serv Res. (2016) 43:783–98. 10.1007/s10488-015-0693-2 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 64. National Institutes of Health Dissemination and Implementation Research in Health (R01). Bethesda, MD: National Institutes of Health; (2016). Available online at: http://grants.nih.gov/grants/guide/pa-files/PAR-16-238.html [ Google Scholar ]
  • 65. Edmondson D, Falzon L, Sundquist KJ, Julian J, Meli L, Sumner JA, et al. A systematic review of the inclusion of mechanisms of action in NIH-funded intervention trials to improve medication adherence. Behav Res Ther. (2018) 101:12–9. 10.1016/j.brat.2017.10.001 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 66. Williams NJ, Beidas RS. The state of implementation science in child psychology and psychiatry: a review and suggestions to advance the field. J Child Psychol Psychiatry (2018). 10.1111/jcpp.12960. [Epub ahead of print]. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 67. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health (2018) 6:1–6. 10.3389/fpubh.2018.00136 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 68. Grol R, Bosch MC, Hulscher MEJL, Eccles MP, Wensing M. Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q. (2007) 85:93–138. 10.1111/j.1468-0009.2007.00478.x [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 69. The Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG) Designing theoretically-informed implementation interventions. Implement Sci. (2006) 1:1–8. 10.1186/1748-5908-1-4 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 70. Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health (2017) 38:1–22. 10.1146/annurev-publhealth-031816-044215 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 71. Brown CH, Have TRT, Jo B, Dagne G, Wyman PA, Muthen B, et al. Adaptive designs for randomized trials in public health. Annu Rev Public Health (2009) 30:1–25. 10.1146/annurev.publhealth.031308.100223 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 72. Burke JG, Lich KH, Neal JW, Meissner HI, Yonas M, Mabry PL. Enhancing dissemination and implementation research using systems science methods. Int J Behav Med. (2015) 22:283–91. 10.1007/s12529-014-9417-3 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 73. Zimmerman L, Lounsbury D, Rosen C, Kimerling R, Trafton J, Lindley S. Participatory system dynamics modeling: increasing engagement and precision to improve implementation planning in systems. Adm Policy Ment Health Ment Health Serv Res. (2016) 43:834–49. 10.1007/s10488-016-0754-1 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 74. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed methods designs in implementation research. Adm Policy Ment Health Ment Health Serv Res. (2011) 38:44–53. 10.1007/s10488-010-0314-z [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 75. Aarons GA, Fettes DL, Sommerfeld DH, Palinkas LA. Mixed methods for implementation research: application to evidence-based practice implementation and staff turnover in community-based organizations providing child welfare services. Child Maltreat. (2012) 17:67–79. 10.1177/1077559511426908 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 76. Alexander JA, Hearld LR. Methods and metrics challenges of delivery-systems research. Implement Sci. (2012) 7:15. 10.1186/1748-5908-7-15 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 77. Patient Centered Outcomes Research Institute Standards for Studies of Complex Interventions. Washington, DC: Patient-Centered Outcomes Research Institute; (2018). Available online at: https://www.pcori.org/research-results/about-our-research/research-methodology/pcori-methodology-standards?utm_source=Funding+awards%2C+GAO+Board+deadline&utm_campaign=Funding+awards%2C+GAO+Board+deadline&utm_medium=email#Complex [ Google Scholar ]
  • 78. Michie S, Carey RN, Johnston M, Rothman AJ, de Bruin M, Kelly MP, et al. From theory-inspired to theory-based interventions: a protocol for developing and testing a methodology for linking behaviour change techniques to theoretical mechanisms of action. Ann Behav Med. (2016) 52:501–12. 10.1007/s12160-016-9816-6 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 79. Institute of Medicine Initial National Priorities for Comparative Effectiveness Research. Washington, DC: The National Academies Press; (2009). [ Google Scholar ]
  • 80. Eccles MP, Armstrong D, Baker R, Cleary K, Davies H, Davies S, et al. An implementation research agenda. Implement Sci. (2009) 4:1–7. 10.1186/1748-5908-4-18 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 81. Newman K, Van Eerd D, Powell BJ, Urquhart R, Cornelissen E, Chan V, et al. Identifying priorities in knowledge translation from the perspective of trainees: results from an online survey. Implement Sci. (2015) 10:1–4. 10.1186/s13012-015-0282-5 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 82. Mazzucca S, Tabak RG, Pilar M, Ramsey AT, Baumann AA, Kryzer E, et al. Variation in research designs used to test the effectiveness of dissemination and implementation strategies: a review. Front Public Health (2018) 6:1–10. 10.3389/fpubh.2018.00032 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 83. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, et al. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. (2014) 9:14. 10.1186/1748-5908-9-14 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 84. Hysong SJ. Audit and feedback features impact effectiveness on care quality. Med Care (2009) 47:356–63. 10.1097/MLR.0b013e3181893f6b [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 85. Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. (2007) 32:S112–8. 10.1016/j.amepre.2007.01.022 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 86. Lewis CC, Scott K, Marti CN, Marriott BR, Kroenke K, Putz JW, et al. Implementing measurement-based care (iMBC) for depression in community mental health: a dynamic cluster randomized trial study protocol. Implement Sci. (2015) 10:1–14. 10.1186/s13012-015-0313-2 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 87. Vale L, Thomas R, MacLennan G, Grimshaw J. Systematic review of economic evaluations and cost analyses of guideline implementation strategies. Eur J Health Econ. (2007) 8:111–21. 10.1007/s10198-007-0043-8 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 88. Raghavan R. The role of economic evaluation in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK. editors Dissemination and Implementation Research in Health: Translating Science to Practice. New York, NY: Oxford University Press; (2018). p. 89–106. [ Google Scholar ]
  • 89. Saldana L, Chamberlain P, Bradford WD, Campbell M, Landsverk J. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of implementation completion. Child Youth Serv Rev. (2014) 39:177–82. 10.1016/j.childyouth.2013.10.006 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 90. Ritzwoller DP, Sukhanova A, Gaglio B, Glasgow RE. Costing behavioral interventions: a practical guide to enhance translation. Ann Behav Med. (2009) 37:218–27. 10.1007/s12160-009-9088-5 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 91. Brown CH, Chamberlain P, Saldana L, Padgett C, Wang W, Cruden G. Evaluation of two implementation strategies in 51 child county public service systems in two states: results of a cluster randomized head-to-head implementation trial. Implement Sci. (2014) 9:1–15. 10.1186/s13012-014-0134-8 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 92. Michie S, Fixsen DL, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. (2009) 4:1–6. 10.1186/1748-5908-4-40 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 93. Dunbar J, Hernan A, Janus E, Davis-Lameloise N, Asproloupos D, O'Reilly S, et al. Implementation salvage experiences from the Melbourne diabetes prevention study. BMC Public Health (2012) 12:806. 10.1186/1471-2458-12-806. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 94. Hoagwood KE, Chaffin M, Chamberlain P, Bickman L, Mittman B. Implementation salvage strategies: maximizing methodological flexibility in children's mental health research. In: 4th Annual NIH Conference on the Science of Dissemination and Implementation. Washington, DC (2011). [ Google Scholar ]
  • 95. Nadeem E, Olin S, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q (2013) 91:354–94. 10.1111/milq.12016 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 96. Colquhoun H, Leeman J, Michie S, Lokker C, Bragge P, Hempel S, et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. Implement Sci. (2014) 9:51. 10.1186/1748-5908-9-51 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 97. Bragge P, Grimshaw JM, Lokker C, Colquhoun H, The AIMD Writing/Working Group. AIMD - a validated, simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. BMC Med Res Methodol. (2017) 17:38. 10.1186/s12874-017-0314-8 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 98. Hoffman TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ (2014) 348:g1687 10.1136/bmj.g1687 [ DOI ] [ PubMed ] [ Google Scholar ]
  • 99. Workgroup for Intervention Development and Evaluation Research WIDER Recommendations to Improve Reporting of the Content of Behaviour Change Interventions (2008). Available online at: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-7-70
  • 100. Albrecht L, Archibald M, Arseneau D, Scott SD. Development of a checklist to assess the quality of reporting of knowledge translation interventions using the Workgroup for Intervention Development and Evaluation Research (WIDER) recommendations. Implement Sci. (2013) 8:52. 10.1186/1748-5908-8-52 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 101. Pinnock H, Epiphaniou E, Sheikh A, Griffiths C, Eldridge S, Craig P, et al. Developing standards for reporting implementation studies of complex interventions (StaRI): a systematic review and e-Delphi. Implement Sci. (2015) 10:1–9. 10.1186/s13012-015-0235-z [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 102. Neta G, Glasgow RE, Carpenter CR, Grimshaw JM, Rabin BA, Fernandez ME, et al. A framework for enhancing the value of research for dissemination and implementation. Am J Public Health (2015) 105:49–57. 10.2105/AJPH.2014.302206 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • 103. McMaster University Health Systems Evidence (2012). Available online at: http://www.mcmasterhealthforum.org/healthsystemsevidence-en
  • 104. Rx for Change Interventions Database (2011). Available online at: https://www.cadth.ca/rx-change

Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide

  • Tammy C Hoffmann, associate professor of clinical epidemiology 1,
  • Paul P Glasziou, director and professor of evidence based medicine 1,
  • Isabelle Boutron, professor of epidemiology 2,
  • Ruairidh Milne, professorial fellow in public health and director 3,
  • Rafael Perera, university lecturer in medical statistics 4,
  • David Moher, senior scientist 5,
  • Douglas G Altman, professor of statistics in medicine 6,
  • Virginia Barbour, medicine editorial director, PLOS 7,
  • Helen Macdonald, assistant editor 8,
  • Marie Johnston, emeritus professor of health psychology 9,
  • Sarah E Lamb, Kadoorie professor of trauma rehabilitation and co-director of Oxford clinical trials research unit 10,
  • Mary Dixon-Woods, professor of medical sociology 11,
  • Peter McCulloch, clinical reader in surgery 12,
  • Jeremy C Wyatt, leadership chair of ehealth research 13,
  • An-Wen Chan, Phelan scientist 14,
  • Susan Michie, professor 15
  • 1 Centre for Research in Evidence Based Practice, Faculty of Health Sciences and Medicine, Bond University, Queensland, Australia, 4229
  • 2 INSERM U738, Université Paris Descartes-Sorbonne Paris Cité, Paris, France
  • 3 Wessex Institute, University of Southampton, Southampton, UK
  • 4 Department of Primary Care Health Sciences, University of Oxford, UK
  • 5 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 6 Centre for Statistics in Medicine, University of Oxford, UK
  • 7 PLOS, Brisbane, Australia
  • 8 BMJ, London, UK
  • 9 Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, UK
  • 10 Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, University of Oxford, Oxford, UK
  • 11 Department of Health Sciences, University of Leicester, Leicester, UK
  • 12 Nuffield Department of Surgical Science, University of Oxford, Oxford, UK
  • 13 Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
  • 14 Women’s College Research Institute, University of Toronto, Toronto, Canada
  • 15 Centre for Outcomes Research and Effectiveness, Department of Clinical, Educational and Health Psychology, University College London, London, UK
  • Correspondence to: T C Hoffmann thoffmann{at}bond.edu.au
  • Accepted 4 February 2014

Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face to face panel meeting. The resultant 12 item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actual)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with an explanation and elaboration for each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.

Introduction

The evaluation of interventions is a major research activity, yet the quality of descriptions of interventions in publications remains remarkably poor. Without a complete published description of the intervention, other researchers cannot replicate or build on research findings. For effective interventions, clinicians, patients, and other decision makers are left unclear about how to reliably implement the intervention. Intervention description involves more than providing a label or the ingredients list. Key features—including duration, dose or intensity, mode of delivery, essential processes, and monitoring—can all influence efficacy and replicability but are often missing or poorly described. For complex interventions, this detail is needed for each component of the intervention. For example, a recent analysis found that only 11% of 262 trials of cancer chemotherapy provided complete details of the trial treatments. 1 The most frequently missing elements were dose adjustment and “premedications,” but 16% of trials omitted even the route of drug administration. The completeness of intervention description is often worse for non-pharmacological interventions: one analysis of trials and reviews found that 67% of descriptions of drug interventions were adequate compared with only 29% of non-pharmacological interventions. 2 A recent study of 137 interventions, from 133 trials of non-drug interventions, found that only 39% of interventions were described adequately in the primary paper or any references, appendices, or websites. 3 This increased, albeit to only 59%, by contacting authors for additional information—a task almost no clinicians and few researchers have time to undertake.

The Consolidated Standards of Reporting Trials (CONSORT) 2010 statement 4 currently suggests in item 5 that authors should report on “The interventions for each group with sufficient details to allow replication, including how and when they were actually administered.” This is appropriate advice, but further guidance seems to be needed: despite endorsement of the CONSORT statement by many journals, reporting of interventions is deficient. The problem arises partly from lack of awareness among authors about what comprises a good description and partly from lack of attention by peer reviewers and editors. 5

A small number of CONSORT extension statements contain expanded guidance about describing interventions, such as non-pharmacological interventions, 6 and specific categories of interventions, such as acupuncture and herbal interventions. 7 8 The guidance for content of trial protocols, SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials), provides some recommendations for describing interventions in protocols. 9 More generic and comprehensive guidance is needed along with robust ways to implement such guidance. We developed an extension of item 5 of the CONSORT 2010 statement and item 11 of the SPIRIT 2013 statement in the form of a checklist and guidance entitled TIDieR (Template for Intervention Description and Replication), with the objective of improving the completeness of reporting, and ultimately the replicability, of interventions. This article describes the methods used to develop and obtain consensus for this checklist and, for each item, provides an explanation, elaboration, and examples of good reporting. While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs, such as trials, case-control studies, and cohort studies.

Methods for development of the TIDieR checklist and guide

Development of the checklist followed the methodological framework for developing reporting guidelines suggested by the EQUATOR Network. 10 In collaboration with the CONSORT steering group, we established a TIDieR steering committee (PPG, TCH, IB, RM, RP). The committee generated a list of 34 potential items from relevant CONSORT checklists and checklists for reporting discipline-specific or particular categories of interventions. The group also reviewed other sources of guidance on intervention reporting identified from a thorough search of the literature, followed by a forward and backward citation search (see appendix 1).

We then used a two round modified Delphi consensus survey method 11 involving a broad range of expertise and stakeholders. In the first round, each of the 34 items generated by the steering committee was rated by survey participants as "omit," "possible," "desirable," or "essential" to include in the final checklist. From the first round, some items were reworded and combined, and then the ranked items were divided into three groups for the second round. The first group contained 13 items with the highest rankings (rated as "essential" by ≥70% of participants or "essential or desirable" by ≥85%), and participants were advised that these would be included in the checklist unless strong objection to their inclusion was received in the second round. The second group contained 13 items with moderate rankings ("essential or desirable" by ≥65%); participants were asked to rate each of these again as "omit," "possible," "desirable," or "essential." The third group contained three items with low rankings, and participants were advised that these items would be removed unless strong objection to their omission was received in the second round. In both rounds, participants could also suggest additional items, comment on item wording, or provide general comments.
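
To make the grouping thresholds concrete, here is a minimal Python sketch of the round-one classification rule; the ratings structure and function name are hypothetical, since the survey analysis was not published as code.

```python
# Minimal sketch of the round-one grouping rule described above.
# The ratings list (one string per respondent) and the function name
# are hypothetical; the published analysis was not released as code.

def classify_item(ratings):
    """Classify a checklist item from its Delphi round-one ratings."""
    n = len(ratings)
    essential = ratings.count("essential") / n
    ess_or_des = essential + ratings.count("desirable") / n
    if essential >= 0.70 or ess_or_des >= 0.85:
        return "high"      # kept unless strongly objected to in round two
    if ess_or_des >= 0.65:
        return "moderate"  # re-rated in round two
    return "low"           # removed unless strongly objected to

# 63 of 90 respondents rating an item "essential" gives 0.70 -> "high"
print(classify_item(["essential"] * 63 + ["desirable"] * 27))
```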

Delphi participants (n=125) were authors of research on describing interventions, clinicians, authors of existing reporting guidelines, clinical trialists, methodologists or statisticians with expertise in clinical trials, and journal editors (see appendix 2). They were invited by email to complete the two rounds of the web based survey. The response rate was 72% (n=90) for the first round. Only those who completed round one and were willing to participate in round two were invited to participate in round two. The response rate for round two was 86% (74 of 86 invited).

After the two Delphi rounds, 13 items were included in the draft checklist, and 13 moderately rated items were retained for further discussion at the in person meeting. The results of the Delphi survey were reported at a two day consensus meeting on 27-28 March 2013, in Oxford, UK. Thirteen invited experts, representing a range of health disciplines (see author list) and with expertise in the development of trial, methodological, and/or reporting guidelines, attended and are all authors of this paper. The meeting began with a review of the literature on intervention reporting, followed by a report of the Delphi process, the draft checklist of 13 items, and rankings of and comments about the additional 13 moderately rated items. Meeting participants discussed the proposed items and agreed which should be included and the wording of each item.

After the meeting, the checklist was distributed to the participants to ensure it reflected the decisions made, and this explanation and elaboration document was drafted. This was then piloted with 26 researchers who were authoring papers of intervention studies, and minor clarifications were made in the elaboration of some items.

Scope of the TIDieR checklist and guide for describing interventions

The overarching purpose of the TIDieR checklist is to prompt authors to describe interventions in sufficient detail to allow their replication. The checklist contains the minimum recommended items for describing an intervention. Authors should provide additional information where they consider it necessary for the replication of an intervention.

Most TIDieR items are relevant for most interventions and applicable to even apparently simple drug interventions, which are sometimes poorly described. 2 If we consider the elements of an evaluation of an intervention—the population, intervention, comparison, outcome (“PICO”)—TIDieR can be seen as a guide for reporting the intervention and comparison (and co-interventions, when relevant) elements of a study. Other elements (such as population, outcomes) and methodological features are covered by CONSORT 2010 or SPIRIT 2013 items for randomised trials and by other checklists (such as the STROBE statement 12 ) for alternate study designs. They have not been duplicated as part of the TIDieR checklist.
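
Seen this way, the division of labour between checklists can be captured in a few lines. The following minimal Python sketch restates the mapping just described; the dictionary name and structure are ours, purely illustrative, and not part of any of the guidelines.

```python
# Sketch of the division of labour described above: TIDieR covers the
# intervention and comparison elements of PICO, while other checklists
# cover the remaining elements. Purely illustrative.
PICO_COVERAGE = {
    "population":   ["CONSORT 2010", "SPIRIT 2013", "STROBE"],
    "intervention": ["TIDieR"],
    "comparison":   ["TIDieR"],  # including co-interventions, when relevant
    "outcome":      ["CONSORT 2010", "SPIRIT 2013", "STROBE"],
}
print(PICO_COVERAGE["intervention"])  # ['TIDieR']
```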

The order in which items are presented in the checklist does not necessarily reflect the order in which information should be presented. It might also be possible to combine a number of items from the checklist into one sentence. For example, information about what materials were used (item 3) and what procedures were followed (item 4) can be combined (example 3c).

We emphasise that our definition of “intervention” extends to describing the intervention received by the comparison group/s in a study. Control interventions and co-interventions are often particularly poorly described; “usual care” is not a sufficient description. When a controlled study is reported, authors should describe what participants in the control group received with the same level of detail used to describe the intervention group, within the limits of feasibility. Full understanding of the comparison group care can help to explain the observed efficacy of an intervention, with greater apparent effect sizes being potentially found when control group care is minimal. 13 Describing the care that each group received will usually require the replication of the checklist for each group in a study.

As well as describing which interventions (or control conditions) were delivered to different groups, authors should also explain legitimate variants of the intervention. Authors might find it helpful to locate their trial on the pragmatic explanatory continuum. 14 If, for example in a pragmatic trial, authors expect there to be variants in aspects of the intervention (for instance, in the “usual care” group across various centres), those variants should be described under the appropriate checklist items.

We recognise that limitations (such as format and length) for journals that are only paper based can sometimes preclude inclusion of all intervention information in the primary paper (that is, the paper that is reporting the main results of the intervention evaluation). The information that is prompted by the TIDieR checklist might therefore be reported in locations beyond the primary paper itself, including online supplementary material linked to the primary paper, a published protocol and/or other published papers, or a website. Authors should specify the location of additional detail in the primary paper (for example, “online appendix 2 for the training manual,” “available at www ...,” or “details are in our published protocol”). When websites provide further details, URLs that are designed to remain stable over time are essential.

The TIDieR checklist explanation and elaboration

The items included in the checklist are shown in table 1. The complete checklist is available in appendix 3, and a Word version, which authors and reviewers can fill out, is available on the EQUATOR Network website (www.equator-network.org/reporting-guidelines/tidier/). An explanation for each item is given below, along with examples of good reporting. Citations for the examples are in table 2.

Table 1. Items included in the Template for Intervention Description and Replication (TIDieR) checklist: information to include when describing an intervention. The full version of the checklist provides space for authors and reviewers to give the location of the information (see appendix 3). [Table not reproduced here.]

Table 2. List of references for the examples used. [Table not reproduced here.]
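
Since the table body is not reproduced here, the following minimal Python sketch arranges the 12 item names from the abstract as a fill-in reporting template; the field keys and function name are illustrative conveniences, not part of the official checklist.

```python
# Minimal sketch: the 12 TIDieR item names (as listed in the abstract)
# arranged as a fill-in template. Keys and function name are
# illustrative, not part of the official checklist.
TIDIER_ITEMS = [
    "Brief name", "Why", "What (materials)", "What (procedure)",
    "Who provided", "How", "Where", "When and how much",
    "Tailoring", "Modifications", "How well (planned)", "How well (actual)",
]

def blank_tidier_report():
    """One description-plus-location slot per item, mirroring the
    checklist's space for stating where each detail can be found."""
    return {item: {"description": "", "location": ""} for item in TIDIER_ITEMS}

# As the scope section recommends, replicate the checklist per group:
study = {arm: blank_tidier_report() for arm in ("intervention", "control")}
study["control"]["Brief name"]["description"] = "Usual care, fully described"
```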

Item 1. Brief name: Provide the name or a phrase that describes the intervention

1a. Single . . . dose of dexamethasone

1b. TREAD (TREAtment of Depression with physical activity) study

1c. Internet based, nurse led vascular risk factor management programme promoting self management

Explanation —Precision in the name, or brief description, of an intervention enables easy identification of the type of intervention and facilitates linkage to other reports on the same intervention. Give the intervention name (examples 1a, 1b), explaining any abbreviations or acronyms in full (example 1b), or a short (one or two line) statement of the intervention without elaboration (example 1c).

Item 2. Why: Describe any rationale, theory, or goal of the elements essential to the intervention

2a. Dexamethasone (10 mg) or placebo was administered 15 to 20 minutes before or with the first dose of antibiotic. . . Studies in animals have shown that bacterial lysis, induced by treatment with antibiotics, leads to inflammation in the subarachnoid space, which may contribute to an unfavourable outcome [references]. These studies also show that adjuvant treatment with anti-inflammatory agents, such as dexamethasone, reduces both cerebrospinal fluid inflammation and neurologic sequelae [references]

2b. Self management of oral anticoagulant therapy may result in a more individualised approach, increased patient responsibility, and enhanced compliance, which may lead to improvement in the regulation of anticoagulation

2c. The TPB [Theory of Planned Behaviour] informed the hypothesised mediators of intention and physical activity that were targeted in the intervention program: instrumental and affective attitude, subjective norm and perceived behavioural control

2d. We chose a 5° wedge because greater wedging is less likely to be tolerated by the wearer [reference] and is difficult to accommodate within a normal shoe

Explanation —Inclusion of the rationale, theory, or goals that underpin an intervention, or the components of a complex intervention, 15 can help others to know which elements are essential, rather than optional or incidental. For example, the colour of capsules used in a pharmacological intervention is likely to be an incidental, not essential, contributor to the intervention’s efficacy and hence reporting of this is not necessary. In some reports, the term “active ingredient” is used and refers to the components within an intervention that can be specifically linked to its effect on outcomes such that, if they were omitted, the intervention would be ineffective. 16 The known or supposed mechanism of action of the active component/s of the intervention should be described.

Example 2a illustrates the rationale for treating bacterial meningitis with dexamethasone in addition to an antibiotic. Behaviour change and implementation interventions might require different forms of description, but the basic principles are the same. It might, alongside an account of the components of the intervention, also be appropriate to describe the intervention in terms of its theoretical basis, including its hypothesised mechanisms of action (examples 2b, 2c). 17 18 19 The rationale behind an important element of an intervention can sometimes be pragmatic and relate to acceptability of the intervention by participants (example 2d).

Item 3. What (materials): Describe any physical or informational materials used in the intervention, including those provided to participants or used in intervention delivery or in training of intervention providers. Provide information on where the materials can be accessed (for example, online appendix, URL)

3a. The educational package included a 12-minute cartoon . . . The presentation of the cartoon was complemented by classroom discussions, display of the same poster that was used for the control group [see figure in appendix 4], dissemination of a pamphlet summarising the key messages delivered in the cartoon, and drawing and essay writing competitions to reinforce the messages . . . The cartoon can be accessed at NEJM.org or at [URL provided]. A specific teacher training workshop was held before commencement of the trial (for details, see the protocol, available at NEJM.org)

3b. The intervention group received a behaviour change counselling training programme called the Talking Lifestyle learning programme that took practitioners through a portfolio-driven set of learning activities. Precise details of both intervention content and the training programme can be found in [URL, login and password provided]. . . Box 1 provides a more detailed description of the components of the training programme

3c. The “local” group received a sonographically guided injection of 2 mL (10 mg/mL) triamcinolone (Kenacort-T, Bristol-Myers Squibb) and 5 mL (10 mg/mL) lidocaine hydrochloride (Xylocaine, AstraZeneca) to the subacromial bursa and an intramuscular injection of 4 mL (10 mg/mL) lidocaine hydrochloride to the upper gluteal region

Explanation —A full description of an intervention should describe what different physical and information materials were used as part of the intervention (this typically will not extend to study consent forms unless they provide written instructions about the intervention that are not provided elsewhere). Intervention materials are the most commonly missing element of intervention descriptions. 3 This list of materials can be regarded as comparable with the “ingredients” required for a recipe. It can include materials provided to participants (example 3a), training materials used with the intervention providers (examples 3a, 3b), or the surgical device or pharmaceutical drug used and its manufacturer (example 3c). For some interventions, it might be possible to describe the materials and the procedures (item 4) together (examples 3c, 4c). If the information is too long or complex to describe in the primary paper, alternative options and formats for providing the materials should be used (see appendix 4 for some examples) and details of where they can be obtained (examples 3a, 3b) should be provided in the primary paper.

Item 4. What (procedures): Describe each of the procedures, activities, and/or processes used in the intervention, including any enabling or support activities

4a. The TREPP [transrectus sheath preperitoneal] technique can be performed under spinal anaesthesia. To reach the PPS [preperitoneal space], a 5 cm straight incision is made about 1 cm above the pubic bone. The anterior rectus sheath is opened, as is the underlying fascia transversalis [figure]. After retraction of the muscle fibres medially, the inferior epigastric vein and artery are identified and retracted medially as well

4b. . . . identified a suitable vein for cannulation. The overlying skin was wiped with an alcohol swab and allowed to dry, as per standard operating procedures. The principal investigator then administered the allocated spray from a distance of about 12 cm for two seconds. This technique avoided “frosting up” of vapocoolant on the skin. Liquid spray on the skin was allowed to evaporate for up to 10 seconds. The area was again wiped with an alcohol swab and cannulation proceeded immediately. Cannulation had to be carried out within 15 seconds of administration of the spray

4c. . . . three periods of exercise each lasting 5 min, supervised by a physiotherapist. The first period consisted of 2 min of indoor jogging, 1 min of stair climbing (three floors), and 2 min of cycling on an ergometer. Resistance on the ergometer was adjusted to ensure that the participant’s respiratory rate was elevated during the 2 min of cycling. At the end of the first period, the patient performed several prolonged and brief expiratory flow accelerations with open glottis, the forced expiratory technique, and finally cough and sputum expectoration. These clearance manoeuvres were performed over 1.5 min. The second period consisted of 1 min of stretching repeated five times, followed by the same expiratory manoeuvres for 1.5 min, as described above. The third period consisted of continuous jumping on a small trampoline. It included 2 min of jumping, 2 min of jumping while throwing and catching a ball, and 1 min of jumping while hitting a tossed ball. This was again followed by expiratory manoeuvres for 1.5 min. The entire regimen was followed by 40 min rest

4d. All health workers doing outpatient consultations in the intervention group received text messages about malaria case management for 6 months . . . The key messages addressed recommendations from the Kenyan national malaria guidelines and training manuals [references]

4e. Onsite activities were implemented by hospital personnel responsible for quality improvement initiatives . . . Standard communication channels were used, including group specific computer based training modules and daily electronic documentation by nursing staff for all groups. On-site training in bathing with chlorhexidine-impregnated cloths was provided to hospitals assigned to a decolonisation regimen . . . Nursing directors performed at least three quarterly observations of bathing, including questioning staff about protocol details. Investigators hosted group specific coaching teleconferences at least monthly to discuss implementation, compliance, and any new potentially conflicting initiatives

Explanation —Describe what processes, activities, or procedures the intervention provider/s carried out. Continuing the recipe metaphor used above, this item refers to the "methods" section of a recipe and, where intervention materials ("ingredients") are involved, describes what is to be done with them. "Procedure" can refer to the sequence of steps to be followed (examples 3c, 4b) and is a term used by some disciplines, particularly surgery, and includes, for example, preoperative assessment, optimisation, type of anaesthesia, and perioperative and postoperative care, along with details of the actual surgical procedure used (example 4a). Examples of processes or activities include referral, screening, case finding, assessment, education, treatment sessions (example 4c), telephone contacts (example 4d), etc. Some interventions, particularly complex ones, might require additional activities to enable or support the intervention to occur (in some disciplines these are known as implementation activities), and these should also be described (example 4e). Elaboration about how to report interventions where the procedure is not the same for all participants is provided at item 9 (tailoring).

Item 5. Who provided: For each category of intervention provider (for example, psychologist, nursing assistant), describe their expertise, background and any specific training given

5a. Only female counsellors were included in this rural area, after consultation with the village chiefs, because it would not have been deemed culturally appropriate for men to counsel women without their husband present . . . Selection criteria for lay counsellors included completion of 12 years of schooling, residence in the intervention area, and a history of community work

5b. The procedure is simple, uses existing surgical skills, and has a short learning curve, with the manufacturers recommending at least five mentored cases before independently practising. All surgeons involved in the study will have completed this training and will have carried out over five procedures prior to recruiting to the study

5c. Therapists received at least one day of training specific to the trial from an experienced CBT [cognitive behaviour therapy] therapist and trainer and weekly supervision from skilled CBT supervisors at each centre. . . The intervention was delivered by 11 part time therapists in the three sites who were representative of those working within NHS psychological services [reference]. Ten of the 11 therapists were female, their mean age was 39.2 years (SD 8.1), and they had practised as a therapist for a mean of 9.7 years (8.1) . . . Nine of the 11 therapists delivered 97% of the intervention and, for these nine, the number of patients per therapist ranged from 13 (6%) to 41 (18%)

5d. . . . brief lifestyle counselling was practised with trained actors and tape recorded. The competency of counselling was checked using the behaviour change counselling index [reference]. Only practitioners who reached a required standard (agreed by inter-rater consensus between three independent clinical assessors) were approved to deliver brief lifestyle counselling in the trial

Explanation —The term “intervention provider” refers to who was involved in providing the intervention (for example, by delivering it to recipients or undertaking specific tasks). This is important in circumstances where the providers’ expertise and other characteristics (example 5a) could affect the outcomes of the intervention. Important issues to address in the description might include the number of providers involved in delivering or undertaking the intervention; their disciplinary background (for example, nurse, occupational therapist, colorectal surgeon, expert patient); what pre-existing specific skills, expertise, and experience providers required and if and how these were verified; details of any additional training specific to the intervention that needed to be given to providers before (example 3b) and/or during the study (example 5c); and if competence in delivering the intervention was assessed before (example 5d) or monitored throughout the study and whether those deemed lacking in competence were excluded (example 5d) or retrained. Other information about providers could include whether the providers were doing the intervention as part of their normal role (example 3b) or were specially recruited as providers for purposes of the study (example 5c); whether providers were reimbursed for their time or provided with other incentives (if so, what) to deliver the intervention as part of the study, and whether such time or incentives might be needed to replicate the intervention.

Item 6. How: Describe the modes of delivery (such as face to face or by some other mechanism, such as internet or telephone) of the intervention and whether it was provided individually or in a group

6a. . . . sessions . . . held weekly and facilitated in groups of 6-12 by . . .

6b. Drugs were delivered by . . . members of the [Reproductive and Child Health] trekking teams . . . teams visited each of the study villages . . .

6c. The text messaging intervention, SMS Turkey, provided six weeks of daily messages aimed at giving participants skills to help them quit smoking. Messages were sent in an automated fashion, except two days and seven days after the initial quit day

6d. . . . made their own appointments online . . . Participants and therapists typed free text into the computer, with messages sent instantaneously; no other media or means of communication were used

6e. . . . three 1 hour home visits (televisits) by a trained assistant . . . ; participants’ daily use of an in-home messaging device . . . that was monitored weekly by the teletherapist; and five telephone intervention calls between the teletherapist and the participant . . .

Explanation —Specify whether the intervention was provided to one participant at a time (such as a surgical intervention) or to a group of participants and, if so, the group size (example 6a). Also describe whether it was delivered face to face (example 6b), by distance (such as by telephone, surface mail, email, internet, DVD, mass media campaign, etc) as in examples 6c, 6d, or a combination of modes (example 6e). When relevant, describe who initiated the contact (example 6c), and whether the session was interactive (example 6d) or not (example 6c), and any other delivery features considered essential or likely to influence outcome.

Item 7. Where: Describe the type(s) of location(s) where the intervention occurred, including any necessary infrastructure or relevant features

7a. . . . medication . . . and a spacer (as appropriate) were delivered to the school nurse for directly observed therapy on the days on which the child attended school. . . An additional canister of preventive medication was delivered to the child’s home to use on weekends and other days the child did not attend school, and the child’s caregiver was shown proper administration technique

7b. Women were recruited from three rural and one peri-urban antenatal clinic in Southern Malawi . . . tablets were taken under supervision at the clinic

7c. . . . participants for the . . . telehealth trial, across three sociodemographically distinct regions in England (rural Cornwall, rural and urban Kent, and urban Newham in London) comprising four primary care trusts. . . Control participants had no telehealth or telecare equipment installed in their homes for the duration of the study. A Lifeline pendant (a personal alarm) plus a smoke alarm linked to a monitoring centre were not, on their own, sufficient to classify as telecare for current purposes

7d. Most births in African countries occur at home, especially in rural areas . . . They identified pregnant women and made five home visits during and after pregnancy . . . Peer counsellors lived in the same communities, so informal contacts to make arrangements for visits were common. . . counsellors were . . . given a bicycle, T shirt. . .

7e. This paper contains a box, titled “Key features of healthcare systems in Northern Ireland and Republic of Ireland,” which summarises relevant aspects of general practices such as funding, registration, and access to free prescriptions

Explanation —In some studies the intervention can be delivered in the same location where participants were recruited and/or data were collected, and details might therefore already be included in the primary paper (for example, as in item 4b of CONSORT 2010 statement if reporting a trial). If, however, the intervention occurred in different locations, this should be specified. At its simplest level, the location might be, for example, in the participants’ home (example 7a), residential aged care facility, school (example 7a), outpatient clinic (example 7b), inpatient hospital room, or a combination of locations (example 7a). Features or circumstances about the location can be relevant to the delivery of the intervention and should be described (example 7e). For example, they might include the country (example 7b), type of hospital or primary care (example 7c), publicly or privately funded care, volume of activity, details of the healthcare system, or the availability of certain facilities or equipment (examples 7c, 7d, 7e). These features can impact on various aspects of the intervention such as its feasibility (example 7d) or provider or participant adherence and are important for those considering replicating the intervention.

Item 8. When and how much: Describe the number of times the intervention was delivered and over what period of time including the number of sessions, their schedule, and their duration, intensity or dose

8a. . . . a loading dose of 1 g of tranexamic acid infused over 10 min, followed by an intravenous infusion of 1 g over 8 h

8b. They received five text messages a day for the first five weeks and then three a week for the next 26 weeks

8c. . . . . exercise three times a week for 24 weeks. . . Participants began with 15 minutes of exercise and increased to 40 minutes by week eight . . . Between weeks eight and 24, attempts to increase exercise intensity were made at least weekly either by increasing treadmill speed or by increasing the treadmill grade. Participants with leg symptoms were encouraged to exercise to near maximal leg symptoms. Asymptomatic participants were encouraged to exercise to a level of 12 to 14 . . . . on the Borg rating of perceived exertion scale [reference]

8d. . . . delivered weekly one hour sessions in the woman’s home, for up to eight weeks . . . starting at around eight weeks postnatally

Explanation —The type of information needed about the “when and how much” of the intervention will differ according to the type of intervention, and for some interventions some aspects will be more important than others. For example, for pharmacological interventions, the dose and scheduling are often important (example 8a); for many non-pharmacological interventions, the “how much” of the intervention is instead described by the duration and number of sessions (examples 8b, 8c). For multiple-session interventions, the schedule of the sessions is also needed (example 8b), along with whether the number of sessions, their schedule, and/or their intensity was fixed (examples 8b, 4c, 6a) or could be varied according to rules and, if so, what those rules were (example 8c). Tailoring of the intervention to individuals or groups of individuals is elaborated on in item 9 (tailoring). For some interventions, as part of the “when” information, detail about the timing of the intervention in relation to relevant events might also be important (for example, how long after diagnosis, first symptoms, or a crucial event did the intervention start) (example 8d). As described below in item 12, the “amount” or dose of intervention that participants actually received might differ from the amount intended. This detail should be described, usually in the results section (examples 12a-c).

Item 9. Tailoring: If the intervention was planned to be personalised, titrated or adapted, then describe what, why, when, and how

9a. Those allocated to the intervention arm followed an intensive stepped programme of management, with mandatory visits to their doctor at weeks 6, 10, 14, and 18 after randomisation to review their blood pressure and to adjust their treatment if needed according to prespecified algorithms [provided in supplementary appendix]

9b. All patients received laparoscopic mini-gastric bypass surgery. . . The bypass limb was adjusted according to the preoperative BMI of the patient. A 150 cm limb was used for BMI 35, with a 10 cm increase in the bypass limb with every BMI category increase, instead of using a fixed limb for all patients

9c. Participants began exercising at 50% of their 1 rm [repetition maximum]. Weights were increased over the first five weeks until participants were lifting 80% of their 1 rm. Weights were adjusted after each monthly 1 rm and as needed to achieve an exercise intensity of a rating of perceived exertion of 12 to 14

9d. Stepped-care decisions for patients . . . were guided by responses to the nine item patient health questionnaire [reference], administered at each treatment visit and formally evaluated at eight week intervals. Patients who did not show prespecified improvement were offered the choice of switching treatments (for example, from problem solving therapy to medication), adding the other treatment, or intensifying the original treatment choice, based on the treatment team’s recommendation (for details, see [reference])

Explanation —In tailored interventions, not all participants receive an identical intervention. Interventions can be tailored for several reasons, such as titration to obtain an appropriate “dose” (example 9a); the participant’s preference, skills, or situation (example 9b); or because tailoring is an intrinsic element of the intervention, as with the increasing intensity of an exercise programme (example 9c). Hence, a brief rationale and guide for tailoring should be provided, including any variables/constructs used for participant assessment (examples 9b, 9c) and subsequent tailoring. Tailoring can occur at several stages, and authors should describe any decision points and rules used at each point (example 9d). If any decisional or instructional materials are used, such as flowcharts, algorithms, or dosing nomograms, these should be included, referenced (example 9d), or their location provided (example 9a).
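To make the idea of a prespecified tailoring rule concrete, the following short Python sketch encodes the rule quoted in example 9b. It is our illustration, not code from the study: the function name is invented, and the assumption that each BMI category spans five units is ours.

# Sketch of the prespecified tailoring rule in example 9b: a 150 cm
# bypass limb at BMI 35, plus 10 cm for each BMI category above that.
# The 5-unit category width is an illustrative assumption.
def bypass_limb_length_cm(bmi: float) -> int:
    if bmi < 35:
        raise ValueError("rule is defined only for BMI >= 35 in this sketch")
    categories_above_35 = int((bmi - 35) // 5)  # completed 5-unit categories
    return 150 + 10 * categories_above_35

print(bypass_limb_length_cm(47))  # two categories above 35 -> 170 cm

Writing such a rule as executable logic is one way to remove ambiguity for anyone attempting replication.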

Item 10. Modifications: If the intervention was modified during the course of the study, describe the changes (what, why, when, and how)

10a. A mixture of general practitioners and practice care nurses delivered 95% of screening and brief intervention activity in this trial. . . Owing to this slow recruitment, research staff who had delivered training in study procedures supported screening and brief intervention delivery in 10 practices and recruited 152 patients, which was 5% of the total number of trial participants

10b. Computers with slow processing units and poor internet connections meant that seven general practitioners never got functional software; they used a structured paper version that was faxed between the research team and general practitioner after each appointment

Explanation — This item refers to modifications that occur at the study level, not individual tailoring as described in item 9. Unforeseen modifications to the intervention can occur during the course of the study, particularly in early studies. If this happens, it is important to explain what was modified, why and when modifications occurred, and how the modified intervention differed from the original (example 10a—modification to who provided the intervention; example 10b—modification in the materials). Modifications sometimes reflect changing circumstances. In other studies, they can show learning about the intervention, which is important to transmit to the reader and others to prevent unnecessary repetition of errors during attempts to replicate the intervention. If changes to the intervention occurred between the published protocol or published pilot study and the primary paper, these changes should also be described.

Item 11. How well (planned): If intervention adherence or fidelity was assessed, describe how and by whom, and if any strategies were used to maintain or improve fidelity, describe them

11a. Pathologists were trained to identify lateral spread of tumour according to the protocol [reference]. The results of histopathological examination of the specimens were reviewed by a panel of supervising pathologists and a quality manager

11b. Staff in the study sites were trained initially, and therapy supervision was provided by weekly meetings between therapists and investigators. Cognitive therapy sessions were taped with the participant’s consent so that participants could be asked to listen to the tapes as part of their homework and to assist supervision. During the course of the trial a sample of 80 tapes was rated according to the cognitive therapy scale-revised [reference] and the cognitive therapy for at risk populations adherence scale [reference] to ensure rigorous adherence to the protocol throughout the duration of the trial. These tapes were drawn from both early and late phases of therapy and included participants from each year of recruitment

11c. Adherence to trial medication was assessed by means of self reported pill counts collected during follow-up telephone calls. These data were categorised as no pills taken, hardly any taken (1-24% of prescribed doses), some taken (25-49%), most taken (50-74%), or all taken (75-100%)

11d. Training will be delivered independently in each of the three regional study centres. All trainers will adhere to a single training protocol to ensure standardised delivery of the training across centres. Training delivery will be planned and rehearsed jointly by all trainers using role play and peer review techniques. In addition, the project manager will act as an observer during the first two training sessions in each centre and will provide feedback to trainers with a view to further standardising the training [note, this example is from a protocol]

Explanation —Fidelity refers to the degree to which an intervention happened in the way the investigators intended it to 20 and can affect the success of an intervention. 21 The terms used to describe this concept vary among disciplines and include treatment integrity, provider or participant adherence, and implementation fidelity. This item—and item 12—extends beyond simple receipt of the intervention (such as how many participants were issued with the intervention drug or exercises) and refers to “how well” the intervention was received or delivered (such as how many participants took the drug/did the exercises, how much they took/did, and for how long). Depending on the intervention, fidelity can apply to one or more parts of the intervention, such as training of providers (examples 11a, 11b, 11d), delivery of the intervention (example 11b), and receipt of the intervention (example 11c). The types of measures used to determine intervention fidelity will also vary according to the type of intervention. For example, in simple pharmacological interventions, assessing fidelity often focuses on recipients’ adherence to taking the drug (example 11c). In complex interventions, such as rehabilitation, psychological, or behaviour change interventions, however, assessment of fidelity is also more complex (example 11b). There are various preplanned strategies and tools that can be used to maintain fidelity before delivery of the intervention (example 11d) or during the study (example 11b). If any strategies or tools were used to maintain fidelity, they should be clearly described. Any materials used as part of assessing or maintaining fidelity should be included, referenced, or their location provided.
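Simple receipt-of-intervention measures such as the pill-count bands in example 11c can be stated unambiguously as a categorisation rule. The hypothetical Python sketch below does so; the function name and the percentage input are our assumptions, while the band labels and cut points come directly from the example.

def adherence_band(percent_taken: float) -> str:
    # Band labels and cut points follow example 11c.
    if percent_taken <= 0:
        return "no pills taken"
    if percent_taken < 25:
        return "hardly any taken (1-24%)"
    if percent_taken < 50:
        return "some taken (25-49%)"
    if percent_taken < 75:
        return "most taken (50-74%)"
    return "all taken (75-100%)"

print(adherence_band(80))  # -> all taken (75-100%)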

Item 12: How well (actual): If intervention adherence or fidelity was assessed, describe the extent to which the intervention was delivered as planned

12a. The mean (SD) number of physiotherapy sessions attended was 7.5 (1.9). Seven patients (9%) completed less than four physiotherapy sessions; the reasons included non-attendance, moving interstate, or recovery from pain. Of patients in the physiotherapy groups, 70% were compliant with their home exercise program during at least five of seven weeks

12b. The EE [early exercise] group reported an adherence rate of 73% at [time] T2 and 75.7% at [time] T3, and the CE [delayed exercise] group reported 86.7% adherence at T3 . . . with the early exercise EE group reporting disease and treatment related barriers to exercise during their cancer treatment (“week of chemotherapy” 14%; “fatigue” 10%) or life related barriers (“illness eg, colds or flu” 16%; “family obligations” 13%)

12c. A total of 214 participants (78%) reported taking at least 75% of the study tablets; the proportion of patients who reported taking at least 75% of the tablets was similar in the two groups

12d. The integrity of the psychological therapy was assessed with the cognitive therapy rating scale [reference] to score transcripts of 40 online sessions for patients who had completed at least five sessions of therapy. With use of computer generated random numbers, at least one such patient was selected for each therapist. For these patients, either session six or the penultimate session was rated by two independent CBT [cognitive behaviour therapy]-trained psychologists, who gave mean ratings of 31 (SD between therapists 9) and 32 (13) of 72

Explanation — For various reasons, an intervention, or parts of it, might not be delivered as intended, thus affecting the fidelity of the intervention. If this is assessed, authors should describe the extent to which the delivered intervention varied from the intended intervention. This information can help to explain study findings, minimise errors in interpreting study outcomes, inform future modifications to the intervention, and, when fidelity is poor, can point to the need for further studies or strategies to improve fidelity or adherence. 22 23 For example, there might be some aspects of the intervention that participants do not like and this could influence their adherence. The way in which the intervention fidelity is reported will reflect the measures used to assess it (examples 12a-d), as described in item 11.

Who should use TIDieR?

We describe a short list of items that we believe can be used to improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information. Consistent with the CONSORT 2010 and SPIRIT 2013 statements, we recommend that interventions are described in enough detail to enable replication, and recommend that authors use the TIDieR checklist to achieve this. As inclusion of all intervention details is not always possible in the primary paper of a study, the TIDieR checklist encourages authors to indicate that they have reported each of the items and to state where this information is located (see appendix 3).

The number of checklist items reported increases when journals require checklist completion as part of the submission process. 24 We encourage journals to endorse the use of the TIDieR checklist, in a similar way to CONSORT and related statements. This can be done by modifying their author instructions, publishing an editorial about intervention reporting, and including a link to the checklist on their website. Few journals currently provide specific guidance about how to report interventions. 25 A small number have editorial policies stating that they will not publish trials unless intervention protocols or full details are available. 26 We encourage other journals to consider adopting similar policies. Any links provided by journals and authors should be reliable and enduring. Stable depositories for descriptions of interventions are also required, and their development needs the contribution and collaboration of all stakeholders in the research community (such as researchers, journal editors, publishers, research funding bodies).

Authors might also want to be guided by the TIDieR items when describing interventions in systematic reviews so that readers of reviews have access to full details of any intervention (or at least details about where to obtain further information) that they want to replicate after reading the review.

Using TIDieR in conjunction with the CONSORT and SPIRIT Statements

For authors submitting reports of randomised trials, we suggest using TIDieR in conjunction with the CONSORT checklist: when authors complete item 5 of the CONSORT checklist, they should insert “refer to TIDieR checklist” and provide a separate completed TIDieR checklist. For journals that adopt this recommendation, their instructions to authors will need to be modified accordingly and their editors and reviewers made aware of the change. Similarly, for authors submitting protocols of trials, the TIDieR checklist can be referred to when dealing with item 11 of the SPIRIT 2013 checklist. One point of difference is that two TIDieR items (items 10 and 12) are not applicable to intervention reporting in protocols because they cannot be completed until the study is complete. This is noted on the TIDieR checklist. Published protocols are likely to grow in importance as a source of information about the intervention, and use of TIDieR in conjunction with the SPIRIT 2013 statement can facilitate this. For authors of study designs other than randomised trials, TIDieR can be used as a standalone checklist or in conjunction with the relevant statement for that study design (such as the STROBE statement 12 ). We acknowledge that describing complex interventions well can be challenging and that for some particularly complex interventions a checklist, such as TIDieR, could go some way towards assisting with intervention reporting but might not be able to capture the full complexity of these interventions.

We recognise that adhering to the TIDieR checklist might increase the word count of a paper, particularly if the study protocol is not publicly available. We believe this might be necessary to help improve the reporting of studies generally and interventions specifically. As journals recognise the importance of well reported studies and fully described methods, and many move to a model of online only, or a hybrid of printed and online with posting of the full study protocol, this might become less of a barrier to quality reporting. For example, the Nature Publishing Group recently removed word limits on the methods section of submitted papers and advises that: “If more space is required to describe the methods completely, the author should include the 300-word section ‘Methods Summary’ and provide an additional ‘Methods’ section at the end of the text, following the figure legends. This Methods section will appear in the online . . . version of the paper, but will not appear in the printed issue. The Methods section should be written as concisely as possible but should contain all elements necessary to allow interpretation and replication of the results.” 27

The TIDieR checklist and guide should assist authors, editors, peer reviewers, and readers. Some authors might perceive this checklist as another time consuming hurdle and elect to seek publication in a journal that does not endorse reporting guidelines. There is a large evidence base indicating that the quality of reporting of health research is unacceptably poor. Properly endorsed and implemented reporting guidelines offer a way for publishers, editors, peer reviewers, and authors to do a better job of completely and transparently describing what was done and found. 28 Doing so will help reduce wasteful research 29 30 and increase the potential impact of research on health.

Summary points

Without a complete published description of interventions, clinicians and patients cannot reliably implement effective interventions

The quality of description of interventions in publications, regardless of type of intervention, is remarkably poor

The Template for Intervention Description and Replication (TIDieR) checklist and guide has been developed to improve the completeness of reporting, and ultimately the replicability, of interventions

TIDieR can be used by authors to structure reports of their interventions, by reviewers and editors to assess completeness of descriptions, and by readers who want to use the information

Cite this as: BMJ 2014;348:g1687

We are grateful to everyone who responded to the Delphi survey and for their thoughtful comments. We also thank Nicola Pidduck (Department of Primary Care Health Sciences, Oxford University) for her assistance in organising the consensus meeting in Oxford.

Contributors: PPG and TCH initiated the TIDieR group and led the organising of the Delphi survey and consensus meeting, in conjunction with the other members of the steering group (IB, RM, and RP). TCH led the writing of the paper. All authors contributed to the drafting and revision of the paper and approved the final version. TCH and PPG are guarantors.

Funding: There was no explicit funding for the development of this checklist and guide. The consensus meeting in March 2013 was partially funded by a NIHR Senior Investigator Award held by PPG. TCH is supported by a National Health and Medical Research Council of Australia (NHMRC)/Primary Health Care Research Evaluation and Development Career Development Fellowship (1033038) with funding provided by the Australian Department of Health and Ageing. PPG is supported by a NHMRC Australia Fellowship (527500). DGA is supported by a programme grant from Cancer Research UK (C5529). MDW is supported by a Wellcome Trust Senior Investigator award (WT097899MA).

Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: RM is employed by NETSCC, part of the National Institute for Health Research (NIHR) in England. NETSCC manages on behalf of NIHR the NIHR Journals Library, “a suite of five open access journals providing an important and permanent archive of research funded by the National Institute for Health Research.” The NIHR Journals Library places great value on reporting the full results of funded research and so is likely to be a user of TIDieR, as it is of other reporting guidelines. VB was the Chief Editor of PLOS Medicine at the time of the consensus meeting and initial drafting of this paper. HM is an assistant editor at BMJ but was not involved in any decision making regarding this paper.

Provenance and peer review: Not commissioned; externally peer reviewed.

1. Duff J, Leather H, Walden E, LaPlant K, George T. Adequacy of published oncology randomised controlled trials to provide therapeutic details needed for clinical application. J Natl Cancer Inst 2010;102:702-5.
2. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472-4.
3. Hoffmann T, Erueti C, Glasziou P. Poor description of non-pharmacological interventions: analysis of consecutive sample of randomised trials. BMJ 2013;347:f3755.
4. Schulz K, Altman D, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332.
5. Schroter S, Glasziou P, Heneghan C. Quality of descriptions of treatments: a review of published randomised controlled trials. BMJ Open 2012;2:e001978.
6. Boutron I, Moher D, Altman D, Schulz K, Ravaud P. Extending the CONSORT statement to randomised trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med 2008;148:295-310.
7. MacPherson H, Altman DG, Hammerschlag R, Youping L, Taixiang W, White A, et al. Revised standards for reporting interventions in clinical trials of acupuncture (STRICTA): extending the CONSORT statement. PLoS Med 2010;7:e1000261.
8. Gagnier J, Boon H, Rochon P, Moher D, Barnes J, Bombardier C, et al. Reporting randomised, controlled trials of herbal interventions: an elaborated CONSORT statement. Ann Intern Med 2006;144:364-7.
9. Chan A, Tetzlaff J, Gøtzsche P, Altman D, Mann H, Berlin J, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 2013;346:e7586.
10. Moher D, Schulz K, Simera I, Altman D. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217.
11. Murphy M, Black N, Lamping D, McKee C, Sanderson C, Askham J, et al. Consensus development methods, and their use in clinical guideline development. Health Technol Assess 1998;2:1-88.
12. Von Elm E, Altman D, Egger M, Pocock S, Gøtzsche P, Vandenbroucke J, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ 2007;335:806-8.
13. De Bruin M, Viechtbauer W, Hospers H, Schaalma H, Kok G. Standard care quality determines treatment outcomes in control groups of HAART-adherence intervention studies: implications for the interpretation and comparison of intervention effects. Health Psychol 2009;28:668-74.
14. Thorpe K, Zwarenstein M, Oxman AD, Treweek S, Furberg C, Altman D, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol 2009;62:464-75.
15. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655.
16. McCleary N, Duncan E, Stewart F, Francis J. Active ingredients are reported more often for pharmacologic than non-pharmacologic interventions: an illustrative review of reporting practices in titles and abstracts. Trials 2013;14:146.
17. Michie S, West R. Behaviour change theory and evidence: a presentation to Government. Health Psychol Rev 2013;7:1-22.
18. Dixon-Woods M, Leslie M, Tarrant C, Bion J. Explaining Matching Michigan: an ethnographic study of a patient safety program. Implement Sci 2013;8:70.
19. Dixon-Woods M, Bosk C, Aveling E, Goeschel C, Pronovost P. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q 2011;89:167-205.
20. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci 2007;2:40.
21. Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci D, Ory M, et al. Enhancing treatment fidelity in health behaviour change studies: best practices and recommendations from the NIH Behaviour Change Consortium. Health Psychol 2004;23:443-51.
22. Hardeman W, Michie S, Fanshawe T, Prevost T, McLoughlin K, Kinmonth AL. Fidelity of delivery of a physical activity intervention: predictors and consequences. Psychol Health 2008;23:11-24.
23. Spillane V, Byrne M, Byrne M, Leathem C, O’Malley M, Cupples M. Monitoring treatment fidelity in a randomised controlled trial of a complex intervention. J Adv Nurs 2007;60:343-52.
24. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 2012;344:e4178.
25. Hoffmann T, English T, Glasziou P. Reporting of interventions in randomised trials: an audit of journal Instructions to Authors. Trials 2014;15:20.
26. Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci 2009;4:40.
27. Nature. For authors: manuscript formatting guide. www.nature.com/nature/authors/gta/index.html#a5.3.
28. Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev 2012;1:60.
29. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009;374:86-9.
30. Glasziou P, Altman D, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014;383:267-76.

How to Write an Intervention Plan [+ Template]

Jenna Buckle

Implementing a multi-tiered system of supports (MTSS) without an intervention planning process is like trying to teach a class without a lesson plan. If you don't know where you're going (or have a plan for getting there), you won't be able to effectively support students.

Intervention plans are typically used as part of student support team processes for MTSS, RTI (response to intervention), or PBIS (positive behavioral interventions and supports). Once a caring adult determines that a student needs targeted support, the next step is to create an intervention plan.

In this post, we'll cover how to write an intervention plan and share a helpful template for getting started.

Table of Contents

What Is an Intervention Plan?

How to Write an Intervention Plan

  • Identify the Student(s)
  • Choose an Intervention Type and Tier
  • Create a Goal for the Student's Intervention Program
  • Select an Intervention Strategy
  • Assign an Adult Champion
  • Set a Timeline
  • Establish a Method for Progress Monitoring

Put This Into Practice


What Is an Intervention Plan?

An intervention plan is a blueprint for helping a student build specific skills or reach a goal. In other words, it's an action plan.

In general, intervention plans include a goal, intervention strategy, timeline, and progress monitoring method.

What Makes a Good Intervention Plan?

Before you get started building an intervention plan, make sure you have the necessary data! Look at the student's progress across multiple dimensions—academics, social-emotional learning, behavior, and attendance. This can help you make more informed decisions about what the student needs.


In addition to being data informed, good intervention plans are measurable and time-bound. You'll want a clear way to measure whether the student is progressing, and a plan for how long you'll deliver the intervention.

The goal is to reach a decision point at the end of an intervention plan. Maybe the student has met their goal, and you can close out their intervention plan. Maybe the student is progressing, but the intervention should continue. Or, maybe the current intervention plan isn't working and it's time to rethink the strategies in place.

Once you've determined that a student can benefit from targeted support, it's time to create an intervention plan. This plan will be your blueprint for helping the student build specific skills or reach a goal. You can download the intervention plan template below and follow its step-by-step instructions for writing an intervention plan.

[Screenshot: Panorama's intervention plan template]

Pro tip for Panorama Users: Panorama Student Success simplifies the process of creating intervention plans. Click on “Create Plan” on a student’s profile page to build a plan for improving the student’s academic performance, behavior, attendance, and/or SEL. You can even generate a secure, temporary link for families to view students’ intervention plans and their progress.

1. Identify the student(s)

Which student will you be supporting? First, record the student's name at the top of the plan. You might also include additional information such as grade level, gender, or other demographic attributes or identifiers used by your school. 

(Keep in mind that you can also create an intervention plan for a small group of students that you're working with. The steps to create a group plan are the same.)

2. Choose an intervention type and tier 

What is the area of focus for the intervention? What subject (or domain) can the student benefit from extra support in? Examples could be English language arts (ELA), math, behavior, social-emotional learning (SEL), or attendance.

Next, specify Tier 2 or Tier 3 depending on the intensity of the intervention. Here is a refresher on the MTSS pyramid:


  • Tier 3 includes more intensive interventions for students whose needs are not addressed at Tiers 1 or 2.
  • Tier 2 consists of targeted interventions for students in need of additional support.
  • Tier 1 is the foundation and includes universal supports for all students.

3. Create a goal for the student's intervention program

This is when you'll identify specific skills to be developed, or the goal you are looking to help the student achieve. 

Remember to frame these in the positive (an opportunity to grow) rather than the negative (a problem to solve).  

It can be helpful to use the SMART goal framework—setting a goal that is specific, measurable, attainable, relevant, and timely.

For example, to build a student's self-efficacy in math, you might set the following goal: “Charles will be able to complete 80% of his do-now activities at the beginning of each math lesson with the support of manipulatives.”

4. Select an intervention strategy

With the intervention goal in mind, identify a strategy or activity that could help this student reach the goal. Sample intervention strategies include 2x10 relationship building, a behavior management plan such as behavior-specific praise, graphic organizers, a lunch bunch, WOOP goal-setting, and math time drills.

Your school district may already have an evidence-based intervention menu to pick from. For example, if your district partners with Panorama, you have access to our Playbook, with over 700 evidence- and research-based interventions. In fact, the Panorama platform recommends interventions from Playbook whenever you create an intervention plan. If you don't have an existing intervention menu, here are a few resources to help you get started building your own library:

  • How to Build a Tiered Intervention Menu
  • 5 PBIS Interventions for Tier 1 to Use in Your District Today
  • 42 MTSS Intervention Strategies to Bring Back to Your Support Team
  • 6 Effective Interventions for Social-Emotional Learning
  • 18 Research-Based Interventions for Your MTSS 
  • 20 Evidence-Based Interventions for High School Students 

5. Assign an adult champion

Who will carry out the intervention plan with fidelity? A teacher? Interventionist? School counselor? 

Clear ownership is key. Whether it's one adult or a team, make sure to document who will be responsible for delivering the intervention(s), logging notes, and monitoring student progress.

6. Set a timeline

Next, set a clear prescription for how often and for how long the intervention will take place. Record a start date (when the intervention is set to begin) and a duration (the expected length of the intervention cycle). We recommend five to six weeks at a minimum so the intervention has a chance to take hold.

7. Establish a method for progress monitoring

You're almost done! The last step in building a great intervention plan is deciding on a data collection strategy. 

Once the intervention plan is underway, it's important to collect and record qualitative and/or quantitative data at regular intervals. Many goals are best tracked quantitatively, such as reading level growth or computational fluency. Other goals (behavioral and SEL goals, for example) might be best tracked qualitatively—like making note of how a student is interacting with peers in class. (Learn more about the fundamentals of progress monitoring for MTSS/RTI.)

Don't forget to include the following information on your intervention plan:

  • Monitoring Frequency: How often you'll update the student’s progress over the course of the intervention cycle. For example, this could be weekly, bi-weekly, or monthly.
  • Monitoring Method: The assessment you'll use to track the student’s progress. Indicate a baseline (the student’s most recent assessment score) and target (desired assessment score). Alternatively, you might plan to track progress through observational notes.
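Pulling the seven steps together, the sketch below shows one hypothetical way to hold an intervention plan as a structured record, written in Python. Every field name and example value is an illustrative assumption rather than Panorama's actual data model; it simply mirrors the elements this post asks you to document.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class InterventionPlan:
    student: str                   # step 1: who is being supported
    focus_area: str                # step 2: e.g., "math", "behavior", "SEL"
    tier: int                      # step 2: 2 or 3
    goal: str                      # step 3: a SMART goal statement
    strategy: str                  # step 4: e.g., "2x10 relationship building"
    adult_champion: str            # step 5: who delivers and monitors
    start_date: date               # step 6
    duration_weeks: int            # step 6: five to six weeks minimum
    monitoring_frequency: str      # step 7: e.g., "weekly"
    monitoring_method: str         # step 7: assessment or observational notes
    baseline: float | None = None  # most recent assessment score
    target: float | None = None    # desired assessment score
    progress_notes: list[str] = field(default_factory=list)

plan = InterventionPlan(
    student="Charles",
    focus_area="math",
    tier=2,
    goal="Complete 80% of do-now activities with the support of manipulatives",
    strategy="math time drills",
    adult_champion="interventionist",
    start_date=date(2024, 1, 8),
    duration_weeks=6,
    monitoring_frequency="weekly",
    monitoring_method="do-now completion rate",
    baseline=0.55,
    target=0.80,
)

A record like this also makes the end-of-cycle decision point concrete: when the duration_weeks window closes, compare the latest monitoring data against the target and decide whether to close, continue, or rethink the plan.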

[Figure: Example of a reading intervention plan in Panorama Student Success (mock data pictured)]

If you use Panorama for MTSS: When creating an intervention plan, you'll see recommended interventions based on the goals of the plan. Then, log qualitative and quantitative notes to monitor a student's progress over time. The notes are saved to the student profile so other educators in your school can stay up-to-date on the student's progress.

Put This Into Practice!

Now that you have the building blocks for writing an effective intervention plan, there's only one thing left to do: put it into action. If you're an MTSS leader or coordinator for your district, we hope that you'll share this process (and template!) with your building-level student support teams. If you are an educator working with a specific student, we hope that this process helps you stay organized as you deliver supports.




NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Institute of Medicine (US) Committee on Health and Behavior: Research, Practice, and Policy. Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington (DC): National Academies Press (US); 2001.


7 Evaluating and Disseminating Intervention Research

Efforts to change health behaviors should be guided by clear criteria of efficacy and effectiveness of the interventions. However, this has proved surprisingly complex and is the source of considerable debate.

The principles of science-based interventions cannot be overemphasized. Medical practices and community-based programs are often based on professional consensus rather than evidence. The efficacy of interventions can only be determined by appropriately designed empirical studies. Randomized clinical trials provide the most convincing evidence, but may not be suitable for examining all of the factors and interactions addressed in this report.

Information about efficacious interventions needs to be disseminated to practitioners. Furthermore, feedback is needed from practitioners to determine the overall effectiveness of interventions in real-life settings. Information from physicians, community leaders, public health officials, and patients is all important for determining the overall effectiveness of interventions.

The preceding chapters review contemporary research on health and behavior from the broad perspectives of the biological, behavioral, and social sciences. A recurrent theme is that continued multidisciplinary and interdisciplinary efforts are needed. Enough research evidence has accumulated to warrant wider application of this information. To extend its use, however, existing knowledge must be evaluated and disseminated. This chapter addresses the complex relationship between research and application. The challenge of bridging research and practice is discussed with respect to clinical interventions, communities, public agencies, systems of health care delivery, and patients.

During the early 1980s, the National Heart, Lung, and Blood Institute (NHLBI) and the National Cancer Institute (NCI) suggested a sequence of research phases for the development of programs that were effective in modifying behavior ( Greenwald, 1984 ; Greenwald and Cullen, 1984 ; NHLBI, 1983 ): hypothesis generation (phase I), intervention methods development (phase II), controlled intervention trials (phase III), studies in defined populations (phase IV), and demonstration research (phase V). Those phases reflect the importance of methods development in providing a basis for large-scale trials and the need for studies of the dissemination and diffusion process as a means of identifying effective application strategies. A range of research and evaluation methods are required to address diverse needs for scientific rigor, appropriateness and benefit to the communities involved, relevance to research questions, and flexibility in cost and setting. Inclusion of the full range of phases from hypothesis generation to demonstration research should facilitate development of a more balanced perspective on the value of behavioral and psychosocial interventions.

EVALUATING INTERVENTIONS

Assessing Outcomes

Choice of outcome measures.

The goals of health care are to increase life expectancy and improve health-related quality of life. Major clinical trials in medicine have evolved toward the documentation of those outcomes. As more trials documented effects on total mortality, some surprising results emerged. For example, studies commonly report that, compared with placebo, lipid-lowering agents reduce total cholesterol and low-density lipoprotein cholesterol, and might increase high-density lipoprotein cholesterol, thereby reducing the risk of death from coronary heart disease ( Frick et al., 1987 ; Lipid Research Clinics Program, 1984 ). Those trials usually were not associated with reductions in death from all causes ( Golomb, 1998 ; Muldoon et al., 1990 ). Similarly, He et al. (1999) demonstrated that intake of dietary sodium in overweight people was not related to the incidence of coronary heart disease but was associated with mortality from coronary heart disease. Another example can be found in the treatment of cardiac arrhythmia. Among adults who previously suffered a myocardial infarction, symptomatic cardiac arrhythmia is a risk factor for sudden death ( Bigger, 1984 ). However, a randomized drug trial in 1455 post-infarction patients demonstrated that those who were randomly assigned to take an anti-arrhythmia drug showed reduced arrhythmia, but were significantly more likely to die from arrhythmia and from all causes than those assigned to take a placebo. If investigators had measured only heart rhythm changes, they would have concluded that the drug was beneficial. Only when primary health outcomes were considered was it established that the drug was dangerous ( Cardiac Arrhythmia Suppression Trial (CAST) Investigators, 1989 ).

Many behavioral intervention trials document the capacity of interventions to modify risk factors ( NHLBI, 1998 ), but relatively few Level I studies measured outcomes of life expectancy and quality of life. As the examples above point out, assessing risk factors may not be adequate. Ramifications of interventions are not always apparent until they are fully evaluated. It is possible that a recommendation for a behavioral change could increase mortality through unforeseen consequences. For example, a recommendation of increased exercise might heighten the incidence of roadside auto fatalities. Although risk factor modification is expected to improve outcomes, assessment of increased longevity is essential. Measurement of mortality as an endpoint does necessitate long-duration trials that can incur greater costs.

Outcome Measurement

One approach to representing outcomes comprehensively is the quality-adjusted life year (QALY). The QALY is a measure of life expectancy ( Gold et al., 1996 ; Kaplan and Anderson, 1996 ) that integrates mortality and morbidity in terms of equivalents of well-years of life. If a woman expected to live to age 75 dies of lung cancer at 50, the disease caused 25 lost life-years. If 100 women with life expectancies of 75 die at age 50, 2,500 (100×25 years) life-years would be lost. But death is not the only outcome of concern. Many adults suffer from diseases that leave them more or less disabled for long periods. Although still alive, their quality of life is diminished. QALYs account for the quality-of-life consequences of illness. For example, a disease that reduces quality of life by one-half reduces QALYs by 0.5 during each year the patient suffers; if the disease affects 2 people, it reduces QALYs by 1 (2×0.5) each year. A pharmaceutical treatment that improves life by 0.2 QALY for 5 people produces the equivalent of 1 QALY if the benefit is maintained over a 1-year period. The basic assumption is that 2 years scored as 0.5 each add up to the equivalent of 1 year of complete wellness, just as 4 years scored as 0.25 each do. A treatment that boosts a patient's health from 0.50 to 0.75 on a scale ranging from 0.0 (death) to 1.0 (the highest level of wellness) adds the equivalent of 0.25 QALY; applied to 4 patients for 1 year, its effect is equivalent to 1 year of complete wellness.

This approach has the advantage of expressing the benefits and side effects of treatment programs in a common unit. Although QALYs typically are used to assess effects on patients, they can also measure effects on others, including caregivers who are placed at risk because their experience is stressful. Most important, QALYs are required for many methods of cost-effectiveness analysis.

The most controversial aspect of the methodology is how values are assigned along the scale. Three methods are commonly used: the standard reference gamble, time-tradeoff, and rating scales. Economists and psychologists differ on their preferred approach to preference assessment. Economists typically prefer the standard gamble because it is consistent with the axioms of choice outlined in decision theory ( Torrance, 1976 ). Economists also accept time-tradeoff because it represents choice, even though it is not exactly consistent with the axioms derived from theory ( Bennett and Torrance, 1996 ). However, evidence from experimental studies questions many of the assumptions that underlie economic models of choice. In particular, human evaluators do poorly at integrating complex probability information when making decisions involving risk ( Tversky and Fox, 1995 ). Economic models often assume that choice is rational, but psychological experiments suggest that methods commonly used in choice studies do not represent the true underlying preference continuum ( Zhu and Anderson, 1991 ). Some evidence supports the use of simple rating scales ( Anderson and Zalinski, 1990 ). Recently, research by economists has attempted to integrate findings from cognitive science, while psychologists have begun their own investigations of choice and decision-making ( Tversky and Shafir, 1992 ). A significant body of studies demonstrates that different methods for estimating preferences produce different values ( Lenert and Kaplan, 2000 ) because the methods ask different questions. More research is needed to clarify the best method for valuing health states.
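The arithmetic running through these examples can be collapsed into a single expression. The following restatement in LaTeX is our summary of the passage, not a formula from the original report:

\[
\text{QALYs gained} = n \times \Delta w \times t
\]

where \(n\) is the number of people affected, \(\Delta w\) is the change in the quality weight on the 0.0 (death) to 1.0 (complete wellness) scale, and \(t\) is the number of years the change lasts. Both worked examples above then yield one well-year: \(5 \times 0.2 \times 1 = 1\) and \(4 \times (0.75 - 0.50) \times 1 = 1\).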

The weighting used for quality adjustment comes from surveys of patient or population groups, an aspect of the method that has generated considerable discussion among methodologists and ethicists ( Kaplan, 1994 ). Preference weights are typically obtained by asking patients or people randomly selected from a community to rate cases that describe people in various states of wellness. The cases usually describe level of functioning and symptoms. Although some studies show small but significant differences in preference ratings between demographic groups ( Kaplan, 1998 ), most studies have shown a high degree of similarity in preferences (see Kaplan, 1994 , for review). A panel convened by the U.S. Department of Health and Human Services reviewed methodologic issues relevant to cost and utility analysis (the formal name for this approach) in health care. The panel concluded that population averages rather than patient group preference weights are more appropriate for policy analysis ( Gold et al., 1996 ).

Several authors have argued that resource allocation on the basis of QALYs is unethical (see La Puma and Lawlor, 1990 ). Those who reject the use of QALYs suggest that they cannot be measured. However, the reliability and validity of quality-of-life measures are well documented ( Spilker, 1996 ). Another ethical challenge to QALYs is that they force health care providers to make decisions based on cost-effectiveness rather than on the health of the individual patient.

Another common criticism of QALYs is that they discriminate against the elderly and the disabled. Older people and those with disabilities have lower QALYs, so it is assumed that fewer services will be provided to them. However, QALYs consider the increment in benefit, not the starting point. Programs that prevent the decline of health status or that prevent deterioration of functioning among the disabled do perform well in QALY outcome analysis. It is likely that QALYs will not reveal benefits for heroic care at the very end of life. However, most people prefer not to take treatment that is unlikely to increase life expectancy or improve quality of life ( Schneiderman et al., 1992 ). Ethical issues relevant to the use of cost-effectiveness analysis are considered in detail in the report of the Panel on Cost-Effectiveness in Health and Medicine ( Gold et al., 1996 ).

Evaluating Clinical Interventions

Behavioral interventions have been used to modify behaviors that put people at risk for disease, to manage disease processes, and to help patients cope with their health conditions. Behavioral and psychosocial interventions take many forms. Some provide knowledge or persuasive information; others involve individual, family, group, or community programs to change or support changes in health behaviors (such as in tobacco use, physical activity, or diet); still others involve patient or health care provider education to stimulate behavior change or risk-avoidance. Behavioral and psychosocial interventions are not without consequence for patients and their families, friends, and acquaintances; interventions cost money, take time, and are not always enjoyable. Justification for interventions requires assurance that the changes advocated are valuable. The kinds of evidence required to evaluate the benefits of interventions are discussed below.

Evidence-Based Medicine

Evidence-based medicine uses the best available scientific evidence to inform decisions about what treatments individual patients should receive ( Sackett et al., 1997 ). Not all studies are equally credible. Last (1995) offered a hierarchy of clinical research evidence, shown in Table 7-1 . Level I, the most rigorous, is reserved for randomized clinical trials (RCTs), in which participants are randomly assigned to the experimental condition or to a meaningful comparison condition—the most widely accepted standard for evaluating interventions. Such trials involve either “single blinding” (investigators know which participants are assigned to the treatment and control groups but participants do not) or “double blinding” (neither the investigators nor the participants know the group assignments) ( Friedman et al., 1985 ). Double blinding is difficult in behavioral intervention trials, but there are some good examples of single-blind experiments. Reviews of the literature often grade studies according to levels of evidence. Level I evidence is considered more credible than Level II evidence; Level III evidence is given little weight.

TABLE 7-1. Research Evidence Hierarchy.

There has been concern about the generalizability of RCTs ( Feinstein and Horwitz, 1997 ; Horwitz, 1987a , b ; Horwitz and Daniels, 1996 ; Horwitz et al., 1996 , 1990 ; Rabeneck et al., 1992 ), specifically because the recruitment of participants can result in samples that are not representative of the population ( Seligman, 1996 ). There is a trend toward increased heterogeneity of the patient population in RCTs. Even so, RCTs often include stringent criteria for participation that can exclude participants on the basis of comorbid conditions or other characteristics that occur frequently in the population. Furthermore, RCTs are often conducted in specialized settings, such as university-based teaching hospitals, that do not draw representative population samples. Trials sometimes exhibit large dropout rates, which further undermine the generalizability of their findings.

Oldenburg and colleagues (1999) reviewed all papers published in 1994 in 12 selected journals on public health, preventive medicine, health behavior, and health promotion and education. They graded the studies according to evidence level: 2% were Level I RCTs and 48% were Level II. The authors expressed concern that behavioral research might not be credible when evaluated against systematic experimental trials, which are more common in other fields of medicine. Studies with more rigorous experimental designs are less likely to demonstrate treatment effectiveness ( Heaney and Goetzel, 1997 ; Mosteller and Colditz, 1996 ). Although there have been relatively few behavioral intervention trials, those that have been published have supported the efficacy of behavioral interventions in a variety of circumstances, including smoking, chronic pain, cancer care, and bulimia nervosa ( Compas et al., 1998 ).

Efficacy and Effectiveness

Efficacy is the capacity of an intervention to work under controlled conditions. Randomized clinical trials are essential in establishing the effects of a clinical intervention ( Chambless and Hollon, 1998 ) and in determining that an intervention can work. However, demonstration of efficacy in an RCT does not guarantee that the treatment will be effective in actual practice settings. For example, some reviews suggest that behavioral interventions in psychotherapy are generally beneficial ( Matt and Navarro, 1997 ), others suggest that interventions are less effective in clinical settings than in the laboratory ( Weisz et al., 1992 ), and others find particular interventions equally effective in experimental and clinical settings ( Shadish et al., 1997 ).

The Division of Clinical Psychology of the American Psychological Association recently established criteria for “empirically supported” psychological treatments ( Chambless and Hollon, 1998 ). In an effort to establish a level of excellence in validating the efficacy of psychological interventions, the criteria are relatively stringent. A treatment is considered empirically supported if it is found to be more effective than either an alternative form of treatment or a credible control condition in at least two RCTs. The effects must be replicated by at least two independent laboratories or investigative teams to ensure that the effects are not attributable to special characteristics of a specific investigator or setting. Several health-related behavior change interventions meeting those criteria have been identified, including interventions for management of chronic pain, smoking cessation, adaptation to cancer, and treatment of eating disorders ( Compas et al., 1998 ).

An intervention that has failed to meet the criteria still has potential value and might represent important or even landmark progress in the field of health-related behavior change. As in many fields of health care, there historically has been little effort to set standards for psychological treatments for health-related problems or disease. Recently, however, managed-care and health maintenance organizations have begun to monitor and regulate both the type and the duration of psychological treatments that are reimbursed. A common set of criteria for making coverage decisions has not been articulated, so decisions are made in the absence of appropriate scientific data to support them. It is in the best interest of the public and those involved in the development and delivery of health-related behavior change interventions to establish criteria that are based on the best available scientific evidence. Criteria for empirically supported treatments are an important part of that effort.

Evaluating Community-Level Interventions

Evaluating the effectiveness of interventions in communities requires different methods. Developing and testing interventions that take a more comprehensive, ecologic approach, and that are effective in reducing risk-related behaviors and influencing the social factors associated with health status, require many levels and types of research ( Flay, 1986 ; Green et al., 1995 ; Greenwald and Cullen, 1984 ). Questions have been raised about the appropriateness of RCTs for addressing research questions when the unit of analysis is larger than the individual, such as a group, organization, or community ( McKinlay, 1993 ; Susser, 1995 ). Although this discussion uses the community as the unit of analysis, similar principles apply to interventions aimed at groups, families, or organizations.

Criteria for the review of community interventions have been suggested by Hancock and colleagues (1997) . Their criteria for rigorous scientific evaluation of community intervention trials include four domains: (1) design, including the randomization of communities to condition and the use of sampling methods that assure representativeness of the entire population; (2) measures, including the use of outcome measures with demonstrated validity and reliability and process measures that describe the extent to which the intervention was delivered to the target audience; (3) analysis, including consideration of both individual variation within each community and community-level variation within each treatment condition; and (4) specification of the intervention in enough detail to allow replication.

Randomization of communities to various conditions raises challenges for intervention research in terms of expense and statistical power ( Koepsell et al., 1995 ; Murray, 1995 ). The restricted hypotheses that RCTs test cannot adequately consider the complexities and multiple causes of human behavior and health status embedded within communities ( Israel et al., 1995 ; Klitzner, 1993 ; McKinlay, 1993 ; Susser, 1995 ). A randomized controlled trial might actually alter the interaction between an intervention and a community and result in an attenuation of the effectiveness of the intervention ( Fisher, 1995 ; McKinlay, 1993 ). At the level of community interventions, experimental control might not be possible, especially when change is unplanned. That is, given the different sociopolitical structures, cultures, and histories of communities and the numerous factors that are beyond a researcher's ability to control, it might be impossible to identify and maintain a commensurate comparison community ( Green et al., 1996 ; Hollister and Hill, 1995 ; Israel et al., 1995 ; Klitzner, 1993 ; Mittelmark et al., 1993 ; Susser, 1995 ). Using a control community does not completely solve the problem of comparison, however, because one “cannot assume that a control community will remain static or free of influence by national campaigns or events occurring in the experimental communities” ( Green et al., 1996 , p. 274).
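To make the power problem concrete, the following minimal Python sketch applies the standard design-effect formula for cluster-randomized designs (a textbook result, not drawn from the sources cited above); the number of communities, community size, and intraclass correlation are hypothetical round numbers.

```python
# Illustrative sketch (a textbook result, not drawn from the sources cited
# above) of why randomizing whole communities is costly in statistical power.

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing intact clusters:
    DEFF = 1 + (m - 1) * ICC, where m is the number of individuals
    measured per community and ICC is the intraclass correlation
    (the share of outcome variance lying between communities)."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, cluster_size: int, icc: float) -> float:
    """Roughly how many independent observations the design is worth."""
    return n_total / design_effect(cluster_size, icc)

# Hypothetical design: 10 communities per arm, 500 respondents each,
# and a small between-community ICC of 0.02.
n, m, icc = 10 * 500, 500, 0.02
print(f"Design effect: {design_effect(m, icc):.1f}")             # 11.0
print(f"Effective n:   {effective_sample_size(n, m, icc):.0f}")  # ~455
```

With an intraclass correlation of only 0.02, five thousand respondents carry roughly the statistical information of 455 independent individuals, which is one arithmetic reason community randomization is so expensive.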

Clear specification of the conceptual model guiding a community intervention is needed to clarify how an intervention is expected to work ( Koepsell, 1998 ; Koepsell et al., 1992 ). This is the contribution of the Theory of Change model for communities described in Chapter 6 . A theoretical framework is necessary to specify mediating mechanisms and modifying conditions. Mediating mechanisms are pathways, such as social support, by which the intervention induces the outcomes; modifying conditions, such as social class, are not affected by the intervention but can influence outcomes independently. Such an approach offers numerous advantages, including the ability to identify pertinent variables and how, when, and in whom they should be measured; the ability to evaluate and control for sources of extraneous variance; and the ability to develop a cumulative knowledge base about how and when programs work ( Bickman, 1987 ; Donaldson et al., 1994 ; Lipsey, 1993 ; Lipsey and Pollard, 1989 ). When an intervention is unsuccessful at stimulating change, data on mediating mechanisms can allow investigators to determine whether the failure is due to the inability of the program to activate the causal processes that the theory predicts or to an invalid program theory ( Donaldson et al., 1994 ).

Small-scale, targeted studies sometimes provide a basis for refining large-scale intervention designs and enhance understanding of methods for influencing group behavior and social change ( Fisher, 1995 ; Susser, 1995 ; Winkleby, 1994 ). For example, more in-depth, comparative, multiple-case-study evaluations are needed to explain and identify lessons learned regarding the context, process, impacts, and outcomes of community-based participatory research ( Israel et al., 1998 ).

Community-Based Participatory Research and Evaluation

As reviewed in Chapter 4 , broad social and societal influences have an impact on health. This concept points to the importance of an approach that recognizes individuals as embedded within social, political, and economic systems that shape their behaviors and constrain their access to resources necessary to maintain their health ( Brown, 1991 ; Gottlieb and McLeroy, 1994 ; Krieger, 1994 ; Krieger et al., 1993 ; Lalonde, 1974 ; Lantz et al., 1998 ; McKinlay, 1993 ; Sorensen et al., 1998a , b ; Stokols, 1992 , 1996 ; Susser and Susser, 1996a , b ; Williams and Collins, 1995 ; World Health Organization [WHO], 1986 ). It also points to the importance of expanding the evaluation of interventions to incorporate such factors ( Fisher, 1995 ; Green et al., 1995 ; Hatch et al., 1993 ; Israel et al., 1995 ; James, 1993 ; Pearce, 1996 ; Sorensen et al., 1998a , b ; Steckler et al., 1992 ; Susser, 1995 ).

This is exemplified by community-based participatory programs, which are collaborative efforts among community members, organization representatives, a wide range of researchers and program evaluators, and others ( Israel et al., 1998 ). The partners contribute “unique strengths and shared responsibilities” ( Green et al., 1995 , p. 12) to enhance understanding of a given phenomenon, and they integrate the knowledge gained from interventions to improve the health and well-being of community members ( Dressler, 1993 ; Eng and Blanchard, 1990–91 ; Hatch et al., 1993 ; Israel et al., 1998 ; Schulz et al., 1998a ). It provides “the opportunity…for communities and science to work in tandem to ensure a more balanced set of political, social, economic, and cultural priorities, which satisfy the demands of both scientific research and communities at higher risk” ( Hatch et al., 1993 , p. 31). The advantages and rationale of community-based participatory research are summarized in Table 7–2 ( Israel et al., 1998 ). The term “community-based participatory research” is used here to clearly differentiate it from “community-based research,” which is often used in reference to research that is placed in the community but in which community members are not actively involved.

TABLE 7-2. Rationale for Community-Based Participatory Research.

Table 7-3 presents a set of principles, or characteristics, that capture the important components of community-based participatory research and evaluation ( Israel et al., 1998 ). Each principle constitutes a continuum and represents a goal, for example, equitable participation and shared control over all phases of the research process ( Cornwall, 1996 ; Dockery, 1996 ; Green et al., 1995 ). Although the principles are presented here as distinct items, community-based participatory research integrates them.

TABLE 7-3. Principles of Community-Based Participatory Research and Evaluation.

There are four major foci of evaluation with implications for research design: context, process, impact, and outcome ( Israel, 1994 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ). A comprehensive community-based participatory evaluation would include all types, but financial constraints often make it practical to pursue only one or two. Evaluation design is extensively reviewed in the literature ( Campbell and Stanley, 1963 ; Cook and Reichardt, 1979 ; Dignan, 1989 ; Green, 1977 ; Green and Gordon, 1982 ; Green and Lewis, 1986 ; Guba and Lincoln, 1989 ; House, 1980 ; Israel et al., 1995 ; Patton, 1987 , 1990 ; Rossi and Freeman, 1989 ; Shadish et al., 1991 ; Stone et al., 1994 ; Thomas and Morgan, 1991 ; Windsor et al., 1994 ; Yin, 1993 ).

Context encompasses the events, influences, and changes that occur naturally in the project setting or environment during the intervention that might affect the outcomes ( Israel et al., 1995 ). Context data provide information about how particular settings facilitate or impede program success. Decisions must be made about which of the many factors in the context of an intervention might have the greatest effect on project success.

Evaluation of process assesses the extent, fidelity, and quality of the implementation of interventions ( McGraw et al., 1994 ). It describes the actual activities of the intervention and the extent of participant exposure, provides quality assurance, describes participants, and identifies the internal dynamics of program operations ( Israel et al., 1995 ).

A distinction is often made in the evaluation of interventions between impact and outcome ( Green and Lewis, 1986 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ; Windsor et al., 1994 ). Impact evaluation assesses the effectiveness of the intervention in achieving desired changes in targeted mediators. These include the knowledge, attitudes, beliefs, and behavior of participants. Outcome evaluation examines the effects of the intervention on health status, morbidity, and mortality. Impact evaluation focuses on what the intervention is specifically trying to change, and it precedes an outcome evaluation. It is proposed that if the intervention can effect change in some intermediate outcome (“impact”), the “final” outcome will follow.

Although the association between impact and outcome may not always be substantiated (as discussed earlier in this chapter), impact may be a necessary measure. In some instances, the outcome goals are too far in the future to be evaluated. For example, childhood cardiovascular risk factor intervention studies typically measure intermediate gains in knowledge ( Parcel et al., 1989 ) and changes in diet or physical activity ( Simons-Morton et al., 1991 ). They sometimes assess cholesterol and blood pressure, but they do not usually measure heart disease because that would not be expected to occur for many years.

Given the aims and the dynamic context within which community-based participatory research and evaluation are conducted, methodologic flexibility is essential. Methods must be tailored to the purpose of the research and evaluation and to the context and interests of the community ( Beery and Nelson, 1998 ; deKoning and Martin, 1996 ; Dockery, 1996 ; Dressler, 1993 ; Green et al., 1995 ; Hall, 1992 ; Hatch et al., 1993 ; Israel et al., 1998 ; Marin and Marin, 1991 ; Nyden and Wiewel, 1992 ; Schulz et al., 1998b ; Singer, 1993 ; Stringer, 1996 ). Numerous researchers have suggested greater use of qualitative data, from in-depth interviews and observational studies, for evaluating the context, process, impact, and outcome of community-based participatory research interventions (Fortmann et al., 1995; Goodman, 1999 ; Hugentobler et al., 1992 ; Israel et al., 1995 , 1998 ; Koepsell et al., 1992 ; Mittelmark et al., 1993 ; Parker et al., 1998 ; Sorensen et al., 1998a ; Susser, 1995 ). Triangulation is the use of multiple methods and sources of data to overcome limitations inherent in each method and to improve the accuracy of the information collected, thereby increasing the validity and credibility of the results ( Denzin, 1970 ; Israel et al., 1995 ; Reichardt and Cook, 1980 ; Steckler et al., 1992 ). For examples of the integration of qualitative and quantitative methods in research and evaluation of public-health interventions, see Steckler et al. (1992) and Parker et al. (1998) .

Assessing Government Interventions

Despite the importance of legislation and regulation to promote public health, the effectiveness of government interventions is poorly understood. Policymakers often cannot answer important empirical questions: do legal interventions work, and at what economic and social cost? In particular, policymakers need to know whether legal interventions achieve their intended goals (e.g., reducing risk behavior). If so, do legal interventions unintentionally increase other risks (risk/risk tradeoff)? Finally, what are the adverse effects of regulation on personal or economic liberties and general prosperity in society? This is an important question not only because freedom has an intrinsic value in democracy, but also because activities that dampen economic development can have health effects. For example, research demonstrates the positive correlation between socioeconomic status and health ( Chapter 4 ).

Legal interventions often are not subjected to rigorous research evaluation. The research that has been done, moreover, has faced challenges in methodology. There are so many variables that can affect behavior and health status (e.g., differences in informational, physical, social, and cultural environments) that it can be extraordinarily difficult to demonstrate a causal relationship between an intervention and a perceived health effect. Consider the methodologic constraints in identifying the effects of specific drunk-driving laws. Several kinds of laws can be enacted within a short period, so it is difficult to isolate the effect of each law. Publicity about the problem and the legal response can cross state borders, making state comparisons more difficult. Because people who drive under the influence of alcohol also could engage in other risky driving behaviors (e.g., speeding, failing to wear safety belts, running red lights), researchers need to control for changes in other highway safety laws and traffic law enforcement. Subtle differences between comparison communities can have unanticipated effects on the impact of legal interventions ( DeJong and Hingson, 1998 ; Hingson, 1996 ).

Despite such methodologic challenges, social science researchers have studied legal interventions, often with encouraging results. The social science, medical, and behavioral literature contains evaluations of interventions in several public health areas, particularly in relation to injury prevention ( IOM, 1999 ; Rivara et al., 1997a , b ). For example, studies have evaluated the effectiveness of regulations to prevent head injuries (bicycle helmets: Dannenberg et al., 1993 ; Kraus et al., 1994 ; Lund et al., 1991 ; Ni et al., 1997 ; Thompson et al., 1996a , b ), choking and suffocation (refrigerator disposal and warning labels on thin plastic bags: Kraus, 1985 ), child poisoning (childproof packaging: Rogers, 1996 ), and burns (tap water: Erdmann et al., 1991 ). One regulatory measure that has received a great deal of research attention relates to reductions in cigarette-smoking ( Chapter 6 ).

Legal interventions can be an important part of strategies to change behaviors. In considering them, government and other public health agencies face difficult and complex tradeoffs between population health and individual rights (e.g., autonomy, privacy, liberty, property). One example is the controversy over laws that require motorcyclists to wear helmets. Ethical concerns accompany the use of legal interventions to mandate behavior change and must be part of the deliberation process.

  • COST-EFFECTIVENESS EVALUATION

It is not enough to demonstrate that a treatment benefits some patients or community members. The demand for health programs exceeds the resources available to pay for them, so it must also be shown that treatments provide clinical benefit and value for money. Investigators, clinicians, and program planners must demonstrate that their interventions constitute a good use of resources.

Well over $1 trillion is spent on health care each year in the United States. Current estimates suggest that expenditures on health care exceed $4000 per person ( Health Care Financing Administration, 1998 ). Investments are made in health care to produce good health status for the population, and it is usually assumed that more investment will lead to greater health. Some expenditures in health care produce relatively little benefit; others produce substantial benefits. Cost-effectiveness analysis (CEA) can help guide the use of resources to achieve the greatest improvement in health status for a given expenditure.

Consider the medical interventions in Table 7-4 , all of which are well-known, generally accepted, and widely used. Some are traditional medical care and some are preventive programs. To emphasize the focus on increasing good health, the table presents the data in units of health bought for $1 million rather than in dollars per unit of health, the usual approach in CEA. The life-year is the most comprehensive unit measure of health. Table 7-4 reveals several important points about resource allocation. There is tremendous variation among the interventions in what can be accomplished for $1 million: it nets 7,750 life-years if used for influenza vaccinations for the elderly, 217 life-years if applied to smoking-cessation programs, but only 2 life-years if used to supply Lovastatin to men aged 35–44 who have high total cholesterol but no heart disease and no other risk factors for heart disease.

TABLE 7-4. Life-Years Yielded by Selected Interventions per $1 Million, 1997 Dollars.

How effectively an intervention contributes to good health depends not only on the intervention, but also on the details of its use. Antihypertensive medication is effective, but Propranolol is more cost-effective than Captopril. Thyroid screening is more cost-effective in women than in men. Lovastatin produces more good health when targeted at older high-risk men than at younger low-risk men. Screening for cervical cancer at 3-year intervals with the Pap smear yields 36 life-years per $1 million (compared with no screening), but each $1 million spent to increase the frequency of screening to every 2 years brings only 1 additional life-year.

The numbers in Table 7-4 illustrate a central concept in resource allocation: opportunity cost. The true cost of choosing to use a particular intervention or to use it in a particular way is not the monetary cost per se, but the health benefits that could have been achieved if the money had been spent on another service instead. Thus, the opportunity cost of providing annual Pap smears ($1 million) rather than smoking-cessation programs is the 217 life-years that could have been achieved through smoking cessation.
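A minimal sketch, restating the figures quoted above, shows how the table's units and the opportunity-cost comparison work; the dictionary and function names are illustrative, not part of any published analysis.

```python
# A minimal sketch restating the figures quoted above; the names here are
# illustrative conveniences, not part of any published analysis.

LIFE_YEARS_PER_MILLION = {
    "influenza vaccination (elderly)": 7750,
    "smoking-cessation programs": 217,
    "Pap smear every 3 years": 36,
    "Lovastatin (low-risk men aged 35-44)": 2,
}

def life_years_per_million(dollars_per_life_year: float) -> float:
    """Convert the usual CEA ratio (dollars per life-year) into the
    table's units: life-years bought with $1 million."""
    return 1_000_000 / dollars_per_life_year

def opportunity_cost(forgone: str) -> int:
    """Opportunity cost as defined in the text: the life-years the
    forgone alternative would have produced for the same $1 million."""
    return LIFE_YEARS_PER_MILLION[forgone]

for name, life_years in LIFE_YEARS_PER_MILLION.items():
    print(f"{name:40s} {life_years:5d} life-years per $1 million")

# Funding annual Pap smears rather than smoking cessation forgoes:
print(opportunity_cost("smoking-cessation programs"), "life-years")
```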

The term cost-effectiveness is commonly used but widely misunderstood. Some people confuse cost-effectiveness with cost minimization. Cost minimization aims to reduce health care costs regardless of health outcomes. CEA does not have cost-reduction per se as a goal but is designed to obtain the most improvement in health for a given expenditure. CEA also is often confused with cost/benefit analysis (CBA), which compares investments with returns. CBA ranks the amount of improved health associated with different expenditures with the aim of identifying the appropriate level of investment. CEA indicates which intervention is preferable given a specific expenditure.

Usually, costs are represented by the net or difference between the total costs of the intervention and the total costs of the alternative to that intervention. Typically, the measure of health is the quality-adjusted life-year (QALY). The net health effect of the intervention is the difference between the QALYs produced by an intervention and the QALYs produced by an alternative or other comparative base.
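The net-cost and net-effect arithmetic reduces to the standard incremental cost-effectiveness ratio. The sketch below is illustrative only; all dollar amounts and QALY totals are hypothetical.

```python
# Illustrative sketch of the net-cost / net-QALY arithmetic described
# above (the incremental cost-effectiveness ratio). All dollar amounts
# and QALY totals are hypothetical.

def icer(cost_new: float, cost_alt: float,
         qalys_new: float, qalys_alt: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra QALY gained, relative to the alternative."""
    net_cost = cost_new - cost_alt
    net_qalys = qalys_new - qalys_alt
    if net_qalys == 0:
        raise ValueError("no incremental health effect; ratio is undefined")
    return net_cost / net_qalys

# Hypothetical program: $250,000 more than usual care, 10 extra QALYs.
print(f"${icer(750_000, 500_000, 60.0, 50.0):,.0f} per QALY")  # $25,000
```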

Comprehensive as it is, CEA does not include everything that might be relevant to a particular decision—so it should never be used mechanically. Decision-makers can have legitimate reasons to emphasize particular groups, benefits, or costs more heavily than others. Furthermore, some decisions require information that cannot be captured easily in a CEA, such as the effect of an intervention on individual privacy or liberty.

CEA is an analytical framework that arises from the question of which ways of promoting good health—procedures, tests, medications, educational programs, regulations, taxes or subsidies, and combinations and variations of these—provide the most effective use of resources. Specific recommendations about behavioral and psychosocial interventions will contribute the most to good health if they are set in this larger context and based on information that demonstrates that they are in the public interest. However, comparing behavioral and psychosocial interventions with other ways of promoting health on the basis of cost-effectiveness requires additional research. Currently there are too few studies that meet this standard to support such recommendations.

  • DISSEMINATION

A basic assumption underlying intervention research is that tested interventions found to be effective are disseminated to and implemented in clinics, communities, schools, and worksites. However, there is a sizable gap between science and practice ( Anderson, 1998 ; Price, 1989 , 1998 ). Researchers and practitioners need to ensure that an intervention is effective, and that the community or organization is prepared to adopt, implement, disseminate, and institutionalize it. There also is a need for demonstration research (phase V) to explain more about the process of dissemination itself.

Dissemination to Consumers

Biomedical research results are commonly reported in the mass media. Nearly every day people are given information about the risks of disease, the benefits of treatment, and the potential health hazards in their environments. They regularly make health decisions on the basis of their understanding of such information. Some evidence shows that lay people often misinterpret health risk information ( Berger and Hendee, 1989 ; Fischhoff, 1999a ), as do their doctors ( Kalet et al., 1994 ; Kong et al., 1986 ). Even on such a widely publicized issue as mammography, for example, evidence suggests that women overestimate their risk of getting breast cancer by a factor of at least 20 and that they overestimate the benefits of mammography by a factor of 100 ( Black et al., 1995 ). In a study of 500 female veterans ( Schwartz et al., 1997 ), half the women overestimated their risk of death from breast cancer by a factor of 8. This did not appear to be because the subjects thought that they were more at risk than other women; only 10% reported that they were at higher risk than the average woman of their age. The topic of communication of health messages to the public is discussed at length in an IOM report, Speaking of Health: Assessing Health Communication Strategies for Diverse Populations ( IOM, 2001 ).

Communicating Risk Information

Improving communication requires understanding what information the public needs. That necessitates both descriptive and normative analyses, which consider what the public believes and what the public should know, respectively. Juxtaposing normative and descriptive analyses might provide guidance for reducing misunderstanding ( Fischhoff and Downs, 1997 ). Formal normative analysis of decisions involves the creation of decision trees, showing the available options and the probabilities of various outcomes of each, whose relative attractiveness (or aversiveness) must be evaluated by people. Although full analyses of decision problems can be quite complex, they often reveal ways to drastically simplify individuals' decision-making problems—in the sense that they reveal a small number of issues of fact or value that really merit serious attention ( Clemen, 1991 ; Merz et al., 1993 ; Raiffa, 1968 ). Those few issues can still pose significant challenges for decision makers. The actual probabilities can differ from people's subjective probabilities (which govern their behavior). For example, a woman who overestimates the value of a mammogram might insist on tests that are of little benefit to her and mistrust the political/medical system that seeks to deny such care ( Woloshin et al., 2000 ). Obtaining estimates of subjective probabilities is difficult. Although eliciting probabilities has been studied in other contexts over the past two generations ( von Winterfeldt and Edwards, 1986 ; Yates, 1990 ), it has received much less attention in medical contexts, where it can pose questions that people are unwilling or unable to confront ( Fischhoff and Bruine de Bruin, 1999 ).
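As a rough illustration of the decision-tree logic described above (and not a clinical model), the sketch below scores each option by its probability-weighted utility; all probabilities and utilities are hypothetical placeholders that a real analysis would elicit from patients and evidence.

```python
# Illustrative sketch of a formal decision-tree analysis: each option leads
# to outcomes with probabilities, and each outcome carries a utility that
# the decision maker must supply. All numbers are hypothetical placeholders.

from typing import Dict, List, Tuple

# option name -> list of (probability, utility) pairs; probabilities sum to 1
DecisionTree = Dict[str, List[Tuple[float, float]]]

def expected_utility(outcomes: List[Tuple[float, float]]) -> float:
    """Probability-weighted average utility of one option's outcomes."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

def best_option(tree: DecisionTree) -> Tuple[str, float]:
    """Return the option with the highest expected utility."""
    scored = {name: expected_utility(outs) for name, outs in tree.items()}
    choice = max(scored, key=scored.get)
    return choice, scored[choice]

tree: DecisionTree = {
    "surgery":          [(0.90, 0.95), (0.10, 0.20)],  # good vs. bad outcome
    "watchful waiting": [(0.70, 0.90), (0.30, 0.60)],
}
print(best_option(tree))  # ('surgery', 0.875) with these placeholder values
```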

In addition to such quantitative beliefs, people often need a qualitative understanding of the processes by which risks are created and controlled. This allows them to get an intuitive feeling for the quantitative estimates, to feel competent to make decisions on their own behalf, to monitor their own experience, and to know when they need help ( Fischhoff, 1999b ; Leventhal and Cameron, 1987 ). Not seeing the world in the same way as scientists do also can lead lay people to misinterpret communications directed at them. One common (and some might argue, essential) strategy for evaluating any public health communication or research instrument is to ask people to think aloud as they answer draft versions of questions ( Ericsson and Simon, 1994 ; Schriver, 1989 ). For example, subjects might be asked about the probability of getting HIV from unprotected sexual activity. Reasons for their assessments might be explored as they elaborate on their impressions and the assumptions they use ( Fischhoff, 1999b ; McIntyre and West, 1992 ). The result should both reveal their intuitive theories and improve the communication process.

When people must evaluate their options, the way in which information is framed can have a substantial effect on how it is used ( Kahneman and Tversky, 1983 ; Schwartz, 1999 ; Tversky and Kahneman, 1988 ). The fairest presentation of risk information might be one in which multiple perspectives are used ( Kahneman and Tversky, 1983 , 1996 ). For example, one common situation involves small risks that add up over the course of time, through repeated exposures. The chances of being injured in an automobile crash are very small for any one outing, whether or not the driver wears a seatbelt. However, driving over a lifetime creates a substantial risk—and a substantial benefit for seatbelt use. One way to communicate that perspective is to do the arithmetic explicitly, so that subjects understand it ( Linville et al., 1993 ). Another method that helps people to understand complex information involves presenting ranges rather than best estimates. Science is uncertain, and it should be helpful for people to understand the intervals within which their risks are likely to fall ( Lipkus and Hollands, 1999 ).
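The "do the arithmetic explicitly" advice can be illustrated with the standard compounding formula for repeated independent exposures; the per-trip risk and trip count below are hypothetical round numbers, not estimates from the cited studies.

```python
# Sketch of the "do the arithmetic explicitly" framing: a per-exposure risk
# that looks negligible compounds over repeated exposures. The per-trip
# probability and trip count are hypothetical round numbers.

def cumulative_risk(per_exposure_risk: float, n_exposures: int) -> float:
    """Probability of at least one bad outcome over n independent exposures."""
    return 1.0 - (1.0 - per_exposure_risk) ** n_exposures

p_trip = 1e-6           # hypothetical injury risk on a single car trip
trips = 2 * 365 * 50    # two trips a day for fifty years of driving

print(f"Single trip: {p_trip:.4%}")                          # 0.0001%
print(f"Lifetime:    {cumulative_risk(p_trip, trips):.1%}")  # about 3.6%
```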

Risk communication can be improved. For example, many members of the public have been fearful that proximity to electromagnetic fields and power lines can increase the risk of cancer. Studies revealed that many people knew very little about properties of electricity. In particular, they usually were unaware that exposure falls off rapidly with distance from the lines. After studying mental models of this risk, Morgan (1995) developed a tiered brochure that presented the problem at several levels of detail. The brochure addressed common misconceptions and explained why scientists disagree about the risks posed by electromagnetic fields. Participants on each side of the debate reviewed the brochure for fairness. Several hundred thousand copies of the brochure have now been distributed. This approach to communication requires that the public listen to experts, but it also requires that the experts listen to the public. Providing information is not enough; it is necessary to take the next step to demonstrate that the information is presented in an unbiased fashion and that the public accurately processes what is offered ( Edworthy and Adams, 1997 ; Hadden, 1986 ; Morgan et al., 2001 ; National Research Council, 1989 ).

The electromagnetic field brochure is an example of a general approach in cognitive psychology, in which communications are designed to create coherent mental models of the domain being considered ( Ericsson and Simon, 1994 ; Fischhoff, 1999b ; Gentner and Stevens, 1983 ; Johnson-Laird, 1980 ). The bases of these communications are formal models of the domain. In the case of the complex processes creating and controlling risks, the appropriate representation is often an influence diagram, a directed graph that captures the uncertain relationships among the factors involved ( Clemen, 1991 ; Morgan et al., 2001 ). Creating such a diagram requires pooling the knowledge of diverse disciplines, rather than letting each tell its own part of the story. Identifying the critical messages requires considering both the science of the risk and recipients' intuitive conceptualizations.

Presentation of Clinical Research Findings

Research results are commonly misinterpreted. When a study shows that the effect of a treatment is statistically significant, it is often assumed that the treatment works for every patient or at least for a high percentage of those treated. In fact, large experimental trials, often with considerable publicity, promote treatments that have only minor effects in most patients. For example, contemporary care for high serum cholesterol has been greatly influenced by results of the Coronary Primary Prevention Trial (CPPT; Lipid Research Clinics Program, 1984 ), in which men were randomly assigned to take a placebo or cholestyramine. Cholestyramine can significantly lower serum cholesterol and, in this trial, reduced it by an average of 8.5%. Men in the treatment group experienced 24% fewer heart attack deaths and 19% fewer heart attacks than did men who took the placebo.

The CPPT showed a 24% reduction in cardiovascular mortality in the treated group. However, the absolute proportions of patients who died of cardiovascular disease were similar in the 2 groups: there were 38 deaths among 1900 participants (2%) in the placebo group and 30 deaths among 1906 participants (1.6%) in the cholestyramine group. In other words, taking the medication for 6 years reduced the chance of dying from cardiovascular disease from 2% to 1.6%.
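Reworking those counts makes the contrast between relative and absolute reduction explicit. The sketch below is illustrative; note that the rounded counts quoted above yield a relative reduction near 21%, slightly below the 24% figure reported from the trial's full analysis.

```python
# The CPPT counts quoted above, reworked to show how the same trial yields
# a large relative reduction but a small absolute one. With these rounded
# counts the relative reduction is about 21%; the trial reported 24%.

deaths_placebo, n_placebo = 38, 1900
deaths_treated, n_treated = 30, 1906

risk_placebo = deaths_placebo / n_placebo   # 0.020
risk_treated = deaths_treated / n_treated   # about 0.016

arr = risk_placebo - risk_treated           # absolute risk reduction
rrr = arr / risk_placebo                    # relative risk reduction

print(f"Placebo risk:       {risk_placebo:.1%}")   # 2.0%
print(f"Treated risk:       {risk_treated:.1%}")   # 1.6%
print(f"Absolute reduction: {arr:.2%}")            # ~0.43 percentage points
print(f"Relative reduction: {rrr:.0%}")            # 21%
```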

Because of the difficulties in communicating risk ratio information, the use of simple statistics, such as the number needed to treat (NNT), has been suggested ( Sackett et al., 1997 ). NNT is the number of people that must be treated to avoid one bad outcome. Statistically, NNT is defined as the reciprocal of the absolute risk reduction. In the cholesterol example, if 2% (0.020) of the patients died in the control arm of an experiment and 1.6% (0.016) died in the experimental arm, the absolute risk reduction is 0.020 − 0.016 = 0.004. The reciprocal of 0.004 is 250. In this case, 250 people would have to be treated for 6 years to avoid 1 death from coronary heart disease. Treatments can harm as well as benefit, so in addition to calculating the NNT, it is valuable to calculate the number needed to harm (NNH). This is the number of people a clinician would need to treat to produce one adverse event. NNT and NNH can be modified for those in particular risk groups. The advantage of these simple numbers is that they allow much clearer communication of the magnitude of treatment effectiveness.
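The same definitions, expressed as code; the NNT inputs are the cholestyramine figures above, while the side-effect rates used for NNH are hypothetical.

```python
# The number-needed-to-treat arithmetic from the text, plus the analogous
# number needed to harm. The NNT inputs are the cholestyramine figures
# above; the side-effect rates used for NNH are hypothetical.

def nnt(risk_control: float, risk_treated: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1.0 / (risk_control - risk_treated)

def nnh(harm_treated: float, harm_control: float) -> float:
    """Number needed to harm: reciprocal of the absolute risk increase."""
    return 1.0 / (harm_treated - harm_control)

print(f"NNT: {nnt(0.020, 0.016):.0f}")   # 250, as in the example above
print(f"NNH: {nnh(0.05, 0.03):.0f}")     # 50, with hypothetical harm rates
```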

Shared Decision Making

Once patients understand the complex information about outcomes, they can fully participate in the decision-making process. The final step in disseminating information to patients involves an interactive process that allows patients to make informed choices about their own health care.

Despite a growing consensus that they should be involved, evidence suggests that patients are rarely consulted. Wennberg (1995) outlined a variety of common medical decisions in which there is uncertainty. In each, treatment selection involves profiles of risks and benefits for patients. Thiazide medications can be effective at controlling blood pressure, but they also can be associated with increased serum cholesterol; the benefit of blood pressure reduction must be balanced against such side effects as dizziness and impotence.

Factors that affect patient decision making and use of health services are not well understood. It is usually assumed that use of medical services is driven primarily by need, that those who are sickest or most disabled use services the most ( Aday, 1998 ). Although illness is clearly the major reason for service use, the literature on small-area variation demonstrates that there can be substantial variability in service use among communities that have comparable illness burdens and comparable insurance coverage ( Wennberg, 1998 ). Therefore, social, cultural, and system variables also contribute to service use.

The role of patients in medical decision making has undergone substantial recent change. In the early 1950s, Parsons (1951) suggested that patients were excluded from medical decision making unless they assumed the “sick role,” in which patients submit to a physician's judgment, and it is assumed that physicians understand the patients' preferences. Through a variety of changes, patients have become more active. More information is now available, and many patients demand a greater role ( Sharf, 1997 ). The Internet offers vast amounts of information to patients, some of it misleading or inaccurate ( Impicciatore et al., 1997 ). One difficulty is that many patients are not sophisticated consumers of technical medical information ( Strum, 1997 ).

Another important issue is whether patients want a role. The literature is contradictory on this point; at least eight studies have addressed the issue. Several suggest that most patients express little interest in participating ( Cassileth et al., 1980 ; Ende et al., 1989 ; Mazur and Hickam, 1997 ; Pendleton and House, 1984 ; Strull et al., 1984 ; Waterworth and Luker, 1990 ). Those studies challenge the basis of shared medical decision making. Is it realistic to engage patients in the process if they are not interested? Deber ( Deber, 1994 ; Deber et al., 1996 ) has drawn an important distinction between problem solving and decision making. Medical problem solving requires technical skill to make an appropriate diagnosis and select treatment. Most patients prefer to leave those judgments in the hands of experts ( Ende et al., 1989 ). Studies challenging the notion that patients want to make decisions typically asked questions about problem solving ( Ende et al., 1989 ; Pendleton and House, 1984 ; Strull et al., 1984 ).

Shared decision making requires patients to express personal preferences for desired outcomes, and many decisions involve very personal choices. Wennberg (1998) offers examples of variation in health care practices that are dominated by physician choice. One is the choice between mastectomy and lumpectomy for women with well-defined breast cancer. Systematic clinical trials have shown that the probability of surviving breast cancer is about equal after mastectomy and after lumpectomy followed by radiation ( Lichter et al., 1992 ). But in some areas of the United States, nearly half of women with breast cancer have mastectomies (for example, Provo, Utah); in other areas less than 2% do (for example, New Jersey; Wennberg, 1998 ). Such differences are determined largely by surgeon choice; patient preference is not considered. In the breast cancer example, interviews suggest that some women have a high preference for maintaining the breast, and others feel more comfortable having more breast tissue removed. The choices are highly personal and reflect variations in comfort with the idea of life with and without a breast. Patients might not want to engage in technical medical problem solving, but they are the only source of information about preferences for potential outcomes.

The process by which patients exercise choice can be difficult. There have been several evaluations of efforts to involve patients in decision making. Greenfield and colleagues (1985) taught patients how to read their own medical records and offered coaching on what questions to ask during encounters with physicians. In this randomized trial involving patients with peptic ulcer disease, those assigned to a 20-minute treatment had fewer functional limitations and were more satisfied with their care than were patients in the control group. A similar experiment involving patients treated for diabetes showed that patients randomly assigned to receive visit preparation scored significantly better than controls on three dimensions of health-related quality of life (mobility, role performance, physical activity). Furthermore, there were significant improvements for biochemical measures of diabetes control ( Greenfield et al., 1988 ).

Many medical decisions are more complex than those studied by Greenfield and colleagues. There are usually several treatment alternatives, and the outcomes for each choice are uncertain. Also, the importance of the outcomes might be valued differently by different people. Shared decision-making programs have been proposed to address those concerns ( Kasper et al., 1992 ). The programs usually use electronic media. Some involve interactive technologies in which a patient becomes familiar with the probabilities of various outcomes; video components also allow a patient to witness the outcomes of others who have made each treatment choice. A variety of interactive programs have been systematically evaluated. In one study ( Barry et al., 1995 ), patients with benign prostatic hyperplasia were given the opportunity to use an interactive video. The video was generally well received, and the authors reported that there was a significant reduction in the rate of surgery and an increase in the proportion who chose “watchful waiting” after using the decision aid. Flood et al. (1996) reported similar results with an interactive program.

Not all evaluations of decision aids have been positive. In one evaluation of an impartial video for patients with ischemic heart disease ( Liao et al., 1996 ), 44% of the patients found it helpful for making treatment choices, but more than 40% reported that it increased their anxiety. Most of the patients had received advice from their physicians before watching the video.

Despite enthusiasm for shared medical decision making, little systematic research has evaluated interventions to promote it ( Frosch and Kaplan, 1999 ). Systematic experimental trials are needed to determine whether the use of shared decision aids enhances patient outcomes. Although decision aids appear to enhance patient satisfaction, it is unclear whether they result in reductions in surgery, as suggested by Wennberg (1998) , or in improved patient outcomes ( Frosch and Kaplan, 1999 ).

Dissemination Through Organizations

The effect of any preventive intervention depends both on its ability to influence health behavior change or reduce health risks and on the extent to which the target population has access to and participates in the program. Few preventive interventions are free-standing in the community. Rather, organizations serve as “hosts” for health promotion and disease prevention programs. Once a program has proven successful in demonstration projects and efficacy trials, it must be adopted and implemented by new organizations. Unfortunately, diffusion to new organizations often proceeds very slowly ( Murray, 1986 ; Parcel et al., 1990 ).

A staged change process has been proposed for optimal diffusion of preventive interventions to new organizations. Although different researchers have offered a variety of approaches, there is consensus on the importance of at least four stages ( Goodman et al., 1997 ):

  • dissemination, during which organizations are made aware of the programs and their benefits;
  • adoption, during which the organization commits to initiating the program;
  • implementation, during which the organization offers the program or services;
  • maintenance or institutionalization, during which the organization makes the program part of its routines and standard offerings.

Research investigating the diffusion of health behavior change programs to new organizations can be seen, for example, in adoption of prevention curricula by schools and of preventive services by medical care practices.

Schools are important because they allow consistent contact with children over their developmental trajectory and they provide a place where acquisition of new information and skills is normative ( Orlandi, 1996b ). Although much emphasis has been placed on developing effective health behavior change curricula for students throughout their school years, the literature is replete with evaluations of school-based curricula that suggest that such programs have been less than successful ( Bush et al., 1989 ; Parcel et al., 1990 ; Rohrbach et al., 1996 ; Walter, 1989 ). Challenges or barriers to effective diffusion of the programs include organizational issues, such as limited time and resources, few incentives for the organization to give priority to health issues, pressure to focus on academic curricula to improve student performance on proficiency tests, and unclear role delineation in terms of responsibility for the program; extra-organizational issues or “environmental turbulence,” such as restructuring of schools, changing school schedules or enrollments, uncertainties in public funding; and characteristics of the programs that make them incompatible with the potential host organizations, such as being too long, costly, and complex ( Rohrbach et al., 1996 ; Smith et al., 1995 ).

Initial or traditional efforts to enhance diffusion focused on the characteristics of the intervention program, but more recent studies have focused on the change process itself. Two NCI-funded studies to diffuse tobacco prevention programs throughout schools in North Carolina and Texas targeted the four stages of change and were evaluated through randomized, controlled trials ( Goodman et al., 1997 ; Parcel et al., 1989 , 1995 ; Smith et al., 1995 ; Steckler et al., 1992 ). Teacher-training interventions appeared to enhance the likelihood of implementation in each study (an effect that has been replicated in other investigations; see Perry et al., 1990 ). However, other strategies (e.g., process consultation, newsletters, self-paced instructional video) were less successful at enhancing adoption and institutionalization. None of the strategies attempted to change the organizing arrangements (such as reward systems or role responsibilities) of the school districts to support continued implementation of the program.

These results suggest that further reliance on organizational change theory might help programs diffuse more rapidly and thoroughly. For example, Rohrbach et al. (1996 , pp. 927–928) suggest that “change agents and school personnel should work as a team to diagnose any problems that may impede program implementation and develop action plans to address them [and that]…change agents need to promote the involvement of teachers, as well as that of key administrators, in decisions about program adoption and implementation.” These suggestions are clearly consistent with an organizational development approach. Goodman and colleagues (1997) suggest that the North Carolina intervention might have been more effective had it included more participative problem diagnosis and action planning, and had consultation been less directive and more oriented toward increasing the fit between the host organization and the program.

Medical Practices

Primary care medical practices have long been regarded as organizational settings that provide opportunities for health behavior interventions. With the growth of managed care and its financial incentives for prevention, these opportunities are even greater ( Gordon et al., 1996 ). Much effort has been invested in the development of effective programs and processes for clinical practices to accomplish health behavior change. However, the diffusion of such programs to medical practices has been slow (e.g., Anderson and May, 1995 ; Lewis, 1988 ).

Most systemic programs encourage physicians, nurses, health educators, and other members of the health-professional team to provide more consistent change-related statements and behavioral support for health-enhancing behaviors in patients ( Chapter 5 ). There might be fundamental aspects of a medical practice that support or inhibit efforts to improve health-related patient behavior ( Walsh and McPhee, 1992 ). Visual reminders to stay up-to-date on immunizations, to stop smoking cigarettes, to use bicycle helmets, and to eat a healthy diet are examples of systemic support for patient activation and self-care ( Lando et al., 1995 ). Internet support for improved self-management of diabetes has shown promise ( McKay et al., 1998 ). Automated chart reminders to ask about smoking status, update immunizations, and ensure timely cancer-screening examinations—such as Pap smears, mammography, and prostate screening—are systematic practice-based improvements that increase the rate of success in reaching stated goals on health process and health behavior measures ( Cummings et al., 1997 ). Prescription forms for specific telephone callback support can enhance access to telephone-based counseling for weight loss, smoking cessation, and exercise and can make such behavioral teaching and counseling more accessible ( Pronk and O'Connor, 1997 ). Those and other structural characteristics of clinical practices are being used and evaluated as systematic practice-based changes that can improve treatment for, and prevention of, various chronic illnesses ( O'Connor et al., 1998 ).

Barriers to diffusion include physician factors, such as lack of training, lack of time, and lack of confidence in one's prevention skills; health-care system factors, such as lack of health-care coverage and inadequate reimbursement for preventive services in fee-for-service systems; and office organization factors, such as inflexible office routines, lack of reminder systems, and unclear assignment of role responsibilities ( Thompson et al., 1995 ; Wagner et al., 1996 ).

The capitated financing of many managed-care organizations greatly reduces system barriers. Interventions that have focused solely on physician knowledge and behavior have not been very effective. Interventions that also addressed office organization factors have been more effective ( Solberg et al., 1998b ; Thompson et al., 1995 ). For example, Put Prevention Into Practice (PPIP; Griffith et al., 1995 ), a comprehensive federal program, was recommended by the U.S. Preventive Services Task Force and is distributed by federal agencies and through professional associations. Using a case study approach, McVea and colleagues (1996) studied the implementation of the program in family practice settings. They found that PPIP was “used not at all or only sporadically by the practices that had ordered the kit” (p. 363). The authors suggested that the practices that provided selected preventive services did not adopt the PPIP because they did not have the organizational skills and resources to incorporate the prevention systems into their office routines without external assistance.

Descriptive research clearly indicates a need for well-conceived and methodologically rigorous diffusion research. Many of the barriers to more rapid and effective diffusion are clearly “systems problems” ( Solberg et al., 1998b ). Thus, even though the results are somewhat mixed, recent work applying systems approaches and organizational development strategies to the diffusion dilemma is encouraging. In particular, the emphasis on building internal capacity for diffusion of the preventive interventions—for example, continuous quality improvement teams ( Solberg et al., 1998a ) and the identification and training of “program champions” within the adopting systems ( Smith et al., 1995 )—seems crucial for institutionalization of the programs.

Dissemination to Community-Based Groups

This section examines three aspects of dissemination: the need for dissemination of effective community interventions, community readiness for interventions, and the role of dissemination research.

Dissemination of Effective Community Interventions

Dissemination requires the identification of core and adaptive elements of an intervention ( Pentz et al., 1990 ; Pentz and Trebow, 1997 ; Price, 1989 ). Core elements are features of an intervention program or policy that must be replicated to maintain the integrity of the interventions as they are transferred to new settings. They include theoretically based behavior change strategies, targeting of multiple levels of influence, and the involvement of empowered community leaders ( Florin and Wandersman, 1990 ; Pentz, 1998 ). Practitioners need training in specific strategies for the transfer of core elements ( Bero et al., 1998 ; Orlandi, 1986 ). In addition, the amount of intervention delivered and its reach into the targeted population might have to remain unaltered to replicate behavior change in a new setting. Research has not established a quantitative “dose” of intervention or a quantitative guide for the percentage of core elements that must be implemented to achieve behavior change. Process evaluation can provide guidance regarding the desired intensity and fidelity to intervention protocol. Botvin and colleagues (1995) , for example, found that at least half the prevention program sessions needed to be delivered to achieve the targeted effects in a youth drug abuse prevention program. They also found that increased prevention effects were associated with fidelity to the intervention protocol, which included standardized training of those implementing the program, implementation within 2 weeks of that training, and delivery of at least two program sessions or activities per week ( Botvin et al., 1995 ).

Adaptive elements are features of an intervention that can be tailored to local community, organizational, social, and economic realities of a new setting without diluting the effectiveness of the intervention ( Price, 1989 ). Adaptations might include timing and scheduling or culturally meaningful themes through which the educational and behavior change strategies are delivered.

Community and Organizational Readiness

Community and organizational factors might facilitate or hinder the adoption, implementation, and maintenance of innovative interventions. Diffusion theory assumes that the unique characteristics of the adopter (such as community, school, or worksite) interact with the specific attributes of the innovation (risk factor targets) to determine whether and when an innovation is adopted and implemented ( Emmons et al., 2000 ; Rogers, 1983 , 1995 ). Rogers (1983 , 1995) has identified characteristics that predict the adoption of innovations in communities and organizations. For example, an innovation that has a relative advantage over the idea or activity that it supersedes is more likely to be adopted. In the case of health promotion, organizations might see smoke-free worksites as having a relative advantage not only for employee health, but also for the reduction of absenteeism. An innovation that is seen as compatible with adopters' sociocultural values and beliefs, with previously introduced ideas, or with adopters' perceived needs for innovation is more likely to be implemented. The less complex and the clearer the innovation, the more likely it is to be adopted. For example, potential adopters are more likely to change their health behaviors when educators provide clear specification of the skills needed to change the behaviors. Trialability is the degree to which an innovation can be experimented with on a limited basis. In nutrition education, adopters are more likely to prepare low-fat recipes at home if they have an opportunity to taste the results in a class or supermarket and are given clear, simple directions for preparing them. Finally, observability is the degree to which the results of an innovation are visible to others. In health behavior change, an example of observability might be attention given to a health promotion program by the popular press ( Pentz, 1998 ; Rogers, 1983 ).

Dissemination Research

The ability to identify effective interventions and explain the characteristics of communities and organizations that support dissemination of those interventions provides the basic building blocks for dissemination. It is necessary, however, to learn more about how dissemination occurs to increase its effectiveness ( Pentz, 1998 ). What are the core elements of interventions, and how can they be adapted ( Price, 1989 )? How do the predictors of diffusion function in the dissemination process ( Pentz, 1998 )? What characteristics of community leaders are associated with dissemination of prevention programs? What personnel and material resources are needed to implement and maintain prevention programs? How can written materials and training in program implementation be provided to preserve fidelity to core elements ( Price, 1989 )?

Dissemination research could help identify alternative ways of conceptualizing the transfer of intervention technology from research to practice settings. Rather than disseminating an exact replication of specific tested interventions, program transfer might be based on core and adaptive intervention components at both the individual and community organizational levels ( Blaine et al., 1997 ; Perry, 1999 ). Dissemination might also be viewed as replicating a community-based participatory research process, or as a planning process that incorporates core components ( Perry, 1999 ), rather than exact duplication of all aspects of intervention activities.

The principles of community-based participatory research presented here could be operationalized and used as criteria for examining the extent to which these dimensions were disseminated to other projects. The guidelines developed by Green and colleagues (1995) for classifying participatory research projects also could be used. Similarly, based on her research and experience with children and adolescents in school health behavior change programs, Perry (1999) developed a guidebook that outlines a 10-step process for developing communitywide health behavior programs for children and adolescents.

Facilitating Interorganizational Linkages

To address complex health issues effectively, organizations increasingly form links with one another to form either dyadic connections (pairs) or networks ( Alter and Hage, 1992 ). The potential benefits of these interorganizational collaborations include access to new information, ideas, materials, and skills; minimization of duplication of effort and services; shared responsibility for complex or controversial programs; increased power and influence through joint action; and increased options for intervention (e.g., one organization might not experience the political constraints that hamper the activities of another; Butterfoss et al., 1993 ). However, interorganizational linkages have costs. Time and resources must be devoted to the formation and maintenance of relationships. Negotiating the assessment and planning processes can take more time. And sometimes an organization can find that the policies and procedures of other organizations are incompatible with its own ( Alter and Hage, 1992 ; Butterfoss et al., 1993 ).

One way a dyadic linkage between organizations can serve health-promoting goals grows out of the diffusion of innovations through organizations. An organization can serve as a "linking agent" (Monahan and Scheirer, 1988), facilitating the adoption of a health innovation by organizations that are potential implementers. For example, the National Institute for Dental Research (NIDR) developed a school-based program to encourage children to use a fluoride mouth rinse to prevent caries. Rather than marketing the program directly to the schools, NIDR worked with state agencies to promote it. In a national study, Monahan and Scheirer (1988) found that when state agencies devoted more staff to the program and located a moderate proportion of their staff in regional offices (rather than in a central office), a larger proportion of school districts was likely to implement the program. Other programs, such as the Heart Partners program of the American Heart Association (Roberts-Gray et al., 1998), have used the concept of linking agents to diffuse preventive interventions. Studies of these approaches attempt to identify the organizational policies, procedures, and priorities that permit the linking agent to reach a large proportion of the organizations that might implement the health behavior program. However, the research in this area does not yet support general conclusions or guidelines.

Interorganizational networks are commonly used in community-wide health initiatives. Such networks might be composed of similar organizations that coordinate service delivery (often called consortia) or organizations from different sectors that bring their respective resources and expertise to bear on a complex health problem (often called coalitions). Multihospital systems or linkages among managed-care organizations and local health departments for treating sexually transmitted diseases (Rutherford, 1998) are examples of consortia. The interorganizational networks used in Project ASSIST and COMMIT, major NCI initiatives to reduce the prevalence of smoking, are examples of coalitions (U.S. Department of Health and Human Services, 1990).

Stage theory has been applied to the formation and performance of interorganizational networks (Alter and Hage, 1992; Goodman and Wandersman, 1994). Various authors have posited somewhat different stages of development, but all include: initial actions to form the coalition; formalization of the coalition's mission, structure, and processes; planning, development, and implementation of programmatic activities; and accomplishment of the coalition's health goals. Stage theory suggests that different strategies are likely to facilitate success at different stages of development (Lewin, 1951; Schein, 1987). The complexity, formalization, staffing patterns, communication and decision-making patterns, and leadership styles of an interorganizational network will affect its ability to progress toward its goals (Alter and Hage, 1992; Butterfoss et al., 1993; Kegler et al., 1998a, b).
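
As a rough illustration of the stage sequence described above, the stages can be written out as an ordered enumeration. The stage labels are paraphrased from the text; the strategy notes paired with each stage are hypothetical examples, not drawn from the cited authors.

    from enum import Enum

    class CoalitionStage(Enum):
        """Developmental stages common to the stage-theory accounts cited above."""
        FORMATION = 1        # initial actions to form the coalition
        FORMALIZATION = 2    # mission, structure, and processes agreed upon
        IMPLEMENTATION = 3   # programmatic activities planned and delivered
        GOAL_ATTAINMENT = 4  # the coalition's health goals accomplished

    # Stage theory's core claim: different strategies facilitate success at
    # different stages. These pairings are illustrative placeholders only.
    STRATEGIES = {
        CoalitionStage.FORMATION: "recruit member organizations; build trust",
        CoalitionStage.FORMALIZATION: "negotiate mission, bylaws, decision rules",
        CoalitionStage.IMPLEMENTATION: "plan, staff, and deliver programs",
        CoalitionStage.GOAL_ATTAINMENT: "document and sustain health outcomes",
    }

    for stage in CoalitionStage:
        print(f"{stage.value}. {stage.name}: {STRATEGIES[stage]}")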

In 1993, Butterfoss and colleagues reviewed the literature on community coalitions and found "relatively little empirical evidence" (p. 315) to bring to bear on the assessment of their effectiveness. Although the use of coalitions in community-wide health promotion continues, the accumulated evidence supporting their effectiveness remains slim. Several case studies suggest that coalitions and consortia can be successful in bringing about changes in health behaviors, health systems, and health status (e.g., Butterfoss et al., 1998; Fawcett et al., 1997; Kass and Freudenberg, 1997; Myers et al., 1994; Plough and Olafson, 1994). However, the conditions under which coalitions are most likely to thrive, and the strategies and processes that are most likely to result in effective coalition functioning, have not been consistently identified empirically.

Evaluation models such as the FORECAST model (Goodman and Wandersman, 1994) and the model proposed by the Work Group on Health Promotion and Community Development at the University of Kansas (Fawcett et al., 1997) address the lack of systematic and rigorous evaluation of coalitions. These models provide strategies and tools for assessing coalition functioning at all stages of development, from initial formation to ultimate influence on the coalition's health goals and objectives. They are predicated on the assumption that successful passage through each stage is necessary, but not sufficient, to ensure successful passage through the next stage. Widespread use of these and other evaluation frameworks and tools could increase the number and quality of empirical studies of the effects of interorganizational linkages.
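
The "necessary but not sufficient" assumption amounts to a gating rule: a coalition cannot be credited with a later stage unless every earlier stage was passed, yet passing one stage guarantees nothing about the next. A minimal sketch of that logic, with each stage assessment reduced to a boolean for illustration (real criteria would come from a framework such as FORECAST):

    def furthest_stage_reached(stages_passed: list) -> int:
        """Return the index of the last consecutively passed stage, or -1.

        Passing stage k is necessary for stage k+1 to count (the gate), but a
        True appearing after a False does not help: earlier success is not
        sufficient to ensure later success.
        """
        reached = -1
        for passed in stages_passed:
            if not passed:
                break  # progress stops at the first unmet stage
            reached += 1
        return reached

    # A coalition that formalized its structure but stalled in implementation:
    # [formation, formalization, implementation, goal attainment]
    print(furthest_stage_reached([True, True, False, True]))  # -> 1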

Orlandi (1996a) states that diffusion failures often result from a lack of fit between the proposed host organization and the intervention program. He therefore suggests that if the purpose is to diffuse an existing program, the design of the program and the process of diffusion need to be flexible enough to adapt to the needs and resources of the organization. If the purpose is to develop and disseminate a new program, the innovation development and transfer processes should be integrated. These conclusions are consistent with some of the studies reviewed above. For example, McVea et al. (1996) concluded that a "one size fits all" approach to clinical preventive systems was not likely to diffuse effectively.

References

  • Aday LA. Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity. Chicago: Health Administration Press; 1998.
  • Alter C, Hage J. Organizations Working Together. Newbury Park, CA: Sage; 1992.
  • Altman DG. Sustaining interventions in community systems: On the relationship between researchers and communities. Health Psychology. 1995; 14 :526–536. [ PubMed : 8565927 ]
  • Anderson LM, May DS. Has the use of cervical, breast, and colorectal cancer screening increased in the United States? American Journal of Public Health. 1995; 85 :840–842. [ PMC free article : PMC1615482 ] [ PubMed : 7762721 ]
  • Anderson NB. After the discoveries, then what? A new approach to advancing evidence-based prevention practice. Programs and abstracts from NIH Conference, Preventive Intervention Research at the Crossroads; Bethesda, MD. 1998. pp. 74–75.
  • Anderson NH, Zalinski J. Functional measurement approach to self-estimation in multiattribute evaluation. In: Anderson NH, editor. Contributions to Information Integration Theory, Vol. 1: Cognition; Vol. 2: Social; Vol. 3: Developmental. Hillsdale, NJ: Erlbaum Press; 1990. pp. 145–185.
  • Antonovsky A. The life cycle, mental health and the sense of coherence. Israel Journal of Psychiatry and Related Sciences. 1985; 22 (4):273–280. [ PubMed : 3836223 ]
  • Baker EA, Brownson CA. Defining characteristics of community-based health promotion programs. In: Brownson RC, Baker EA, Novick LF, editors. Community -Based Prevention Programs that Work. Gaithersburg, MD: Aspen; 1999. pp. 7–19.
  • Balestra DJ, Littenberg B. Should adult tetanus immunization be given as a single vaccination at age 65? A cost-effectiveness analysis. Journal of General Internal Medicine. 1993; 8 :405–412. [ PubMed : 8410405 ]
  • Barry MJ, Fowler FJ, Mulley AG, Henderson JV, Wennberg JE. Patient reactions to a program designed to facilitate patient participation in treatment decisions for benign prostatic hyperplasia. Medical Care. 1995; 33 :771–782. [ PubMed : 7543639 ]
  • Beery B, Nelson G. Making outcomes matter. Seattle: Group Health/Kaiser Permanente Community Foundation; 1998. Evaluating community-based health initiatives: Dilemmas, puzzles, innovations and promising directions.
  • Bennett KJ, Torrance GW. Measuring health preferences and utilities: Rating scale, time trade-off and standard gamble methods. In: Spilker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadelphia: Lippincott-Raven; 1996. pp. 235–265.
  • Berger ES, Hendee WR. The expression of health risk information. Archives of Internal Medicine. 1989; 149 :1507–1508. [ PubMed : 2742423 ]
  • Berger PL, Neuhaus RJ. To empower people: The role of mediating structures in public policy. Washington, DC: American Enterprise Institute for Public Policy Research; 1977.
  • Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal. 1998; 317 :465–468. [ PMC free article : PMC1113716 ] [ PubMed : 9703533 ]
  • Bickman L. The functions of program theory. New Directions in Program Evaluation. 1987; 33 :5–18.
  • Bigger JTJ. Antiarrhythmic treatment: An overview. American Journal of Cardiology. 1984; 53 :8B–16B. [ PubMed : 6364771 ]
  • Bishop R. Initiating empowering research? New Zealand Journal of Educational Studies. 1994; 29 :175–188.
  • Bishop R. Addressing issues of self-determination and legitimation in Kaupapa Maori research. In: Webber B, editor. Research Perspectives in Maori Education. Wellington, New Zealand: Council for Educational Research; 1996. pp. 143–160.
  • Black WC, Nease RFJ, Tosteson AN. Perceptions of breast cancer risk and screening effectiveness in women younger than 50 years of age. Journal of the National Cancer Institute. 1995; 87 :720–731. [ PubMed : 7563149 ]
  • Blaine TM, Forster JL, Hennrikus D, O'Neil S, Wolfson M, Pham H. Creating tobacco control policy at the local level: Implementation of a direct action organizing approach. Health Education and Behavior. 1997; 24 :640–651. [ PubMed : 9307899 ]
  • Botvin GJ, Baker E, Dusenbury L, Botvin EM, Diaz T. Long-term followup results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association. 1995; 273 :1106–1112. [ PubMed : 7707598 ]
  • Brown ER. Community action for health promotion: A strategy to empower individuals and communities. International Journal of Health Services. 1991; 21 :441–456. [ PubMed : 1917205 ]
  • Brown P. The role of the evaluator in comprehensive community initiatives. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, editors. New Approaches to Evaluating Community Initiatives. Washington, DC: Aspen; 1995. pp. 201–225.
  • Bush PJ, Zuckerman AE, Taggart VS, Theiss PK, Peleg EO, Smith SA. Cardiovascular risk factor prevention in black school children: The Know Your Body: Evaluation Project. Health Education Quarterly. 1989; 16 :215–228. [ PubMed : 2732064 ]
  • Butterfoss FD, Morrow AL, Rosenthal J, Dini E, Crews RC, Webster JD, Louis P. CINCH: An urban coalition for empowerment and action. Health Education and Behavior. 1998; 25 :212–225. [ PubMed : 9548061 ]
  • Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion. Health Education Research. 1993; 8 :315–330. [ PubMed : 10146473 ]
  • Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally; 1963.
  • Cardiac Arrhythmia Suppression Trial (CAST) Investigators. Preliminary report: Effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. New England Journal of Medicine. 1989; 321 :406–412. [ PubMed : 2473403 ]
  • Cassileth BR, Zupkis RV, Sutton-Smith K, March V. Information and participation preferences among cancer patients. Annals of Internal Medicine. 1980; 92 :832–836. [ PubMed : 7387025 ]
  • Centers for Disease Control, Agency for Toxic Substances and Disease Registry (CDC/ATSDR). Principles of Community Engagement. Atlanta: CDC Public Health Practice Program Office; 1997.
  • Chambless DL, Hollon SD. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998; 66 :7–18. [ PubMed : 9489259 ]
  • Clemen RT. Making Hard Decisions. Boston: PWS-Kent; 1991.
  • Compas BE, Haaga DF, Keefe FJ, Leitenberg H, Williams DA. Sampling of empirically supported psychological treatments from health psychology: Smoking, chronic pain, cancer, and bulimia nervosa. Journal of Consulting and Clinical Psychology. 1998; 66 :89–112. [ PubMed : 9489263 ]
  • Cook TD, Reichardt CS. Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage; 1979.
  • Cornwall A. Towards participatory practice: Participatory rural appraisal (PRA) and the participatory process. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 94–107.
  • Cornwall A, Jewkes R. What is participatory research? Social Science and Medicine. 1995; 41 :1667–1676. [ PubMed : 8746866 ]
  • Cousins JB, Earl LM, editors. Participatory Evaluation: Studies in Evaluation Use and Organizational Learning. London: Falmer; 1995.
  • Cromwell J, Bartosch WJ, Fiore MC, Hasselblad V, Baker T. Cost-effectiveness of the clinical practice recommendations in the AHCPR guideline for smoking cessation. Journal of the American Medical Association. 1997; 278 :1759–1766. [ PubMed : 9388153 ]
  • Cummings NA, Cummings JL, Johnson JN, editors. Behavioral Health in Primary Care: A Guide for Clinical Integration. Madison, CT: Psychosocial Press; 1997.
  • Danese MD, Powe NR, Sawin CT, Ladenson PW. Screening for mild thyroid failure at the periodic health examination: A decision and cost-effectiveness analysis. Journal of the American Medical Association. 1996; 276 :285–292. [ PubMed : 8656540 ]
  • Dannenberg AL, Gielen AC, Beilenson PL, Wilson MH, Joffe A. Bicycle helmet laws and educational campaigns: An evaluation of strategies to increase children's helmet use. American Journal of Public Health. 1993; 83 :667–674. [ PMC free article : PMC1694700 ] [ PubMed : 8484446 ]
  • Deber RB. Physicians in health care management. 7. The patient-physician partnership: Changing roles and the desire for information. Canadian Medical Association Journal. 1994; 151 :171–176. [ PMC free article : PMC1336877 ] [ PubMed : 8039062 ]
  • Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Archives of Internal Medicine. 1996; 156 :1414–1420. [ PubMed : 8678709 ]
  • DeJong W, Hingson R. Strategies to reduce driving under the influence of alcohol. Annual Review of Public Health. 1998; 19 :359–378. [ PubMed : 9611624 ]
  • deKoning K, Martin M. Participatory research in health: Setting the context. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 1–18.
  • Denzin NK. The research act. In: Denzin NK, editor. The Research Act in Sociology: A Theoretical Introduction to Sociological Methods. Chicago, IL: Aldine; 1970. pp. 345–360.
  • Denzin NK. The suicide machine. In: Long RE, editor. Suicide. 2. Vol. 67. New York: H.W. Wilson; 1994.
  • Dignan MB, editor. Measurement and evaluation of health education. Springfield, IL: C.C. Thomas; 1989.
  • Dockery G. Rhetoric or reality? Participatory research in the National Health Service, UK. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 164–176.
  • Donaldson SI, Graham JW, Hansen WB. Testing the generalizability of intervening mechanism theories: Understanding the effects of adolescent drug use prevention interventions. Journal of Behavioral Medicine. 1994; 17 :195–216. [ PubMed : 8035452 ]
  • Dressler WW. Commentary on “Community Research: Partnership in Black Communities.” American Journal of Preventive Medicine. 1993; 9 :32–34. [ PubMed : 8123284 ]
  • Durie MH. Characteristics of Maori health research. Presented at Hui Whakapiripiri: A Hui to Discuss Strategic Directions for Maori Health Research; Wellington, New Zealand: Eru Pomare Maori Health Research Centre, Wellington School of Medicine, University of Otago; 1996.
  • Eddy DM. Screening for cervical cancer. Annals of Internal Medicine. 1990; 113 :214–226. Reprinted in Eddy, D.M. (1991). Common Screening Tests. Philadelphia: American College of Physicians. [ PubMed : 2115753 ]
  • Edelson JT, Weinstein MC, Tosteson ANA, Williams L, Lee TH, Goldman L. Long-term cost-effectiveness of various initial monotherapies for mild to moderate hypertension. Journal of the American Medical Association. 1990; 263 :407–413. [ PubMed : 2136759 ]
  • Edworthy J, Adams AS. Warning Design. London: Taylor and Francis; 1997.
  • Elden M, Levin M. Cogenerative learning. In: Whyte WF, editor. Participatory Action Research. Newbury Park, CA: Sage; 1991. pp. 127–142.
  • Emmons KM, Thompson B, Sorensen G, Linnan L, Basen-Engquist K, Biener L, Watson M. The relationship between organizational characteristics and the adoption of workplace smoking policies. Health Education and Behavior. 2000; 27 :483–501. [ PubMed : 10929755 ]
  • Ende J, Kazis L, Ash A, Moskowitz MA. Measuring patients' desire for autonomy: Decision making and information-seeking preferences among medical patients. Journal of General Internal Medicine. 1989; 4 :23–30. [ PubMed : 2644407 ]
  • Eng E, Blanchard L. Action-oriented community diagnosis: A health education tool. International Quarterly of Community Health Education. 1990–91; 11 :93–110. [ PubMed : 20840941 ]
  • Eng E, Parker EA. Measuring community competence in the Mississippi Delta: the interface between program evaluation and empowerment. Health Education Quarterly. 1994; 21 :199–220. [ PubMed : 8021148 ]
  • Erdmann TC, Feldman KW, Rivara FP, Heimbach DM, Wall HA. Tap water burn prevention: The effect of legislation. Pediatrics. 1991; 88 :572–577. [ PubMed : 1881739 ]
  • Ericsson A, Simon HA. Verbal Protocol As Data. Cambridge, MA: MIT Press; 1994.
  • Fawcett SB, Lewis RK, Paine-Andrews A, Francisco VT, Richter KP, Williams EL, Copple B. Evaluating community coalitions for prevention of substance abuse: The case of Project Freedom. Health Education and Behavior. 1997; 24 :812–828. [ PubMed : 9408793 ]
  • Fawcett SB. Some values guiding community research and action. Journal of Applied Behavior Analysis. 1991; 24 :621–636. [ PMC free article : PMC1279615 ] [ PubMed : 16795759 ]
  • Fawcett SB, Paine-Andrews A, Francisco VT, Schultz JA, Richter KP, Lewis RK, Harris KJ, Williams EL, Berkley JY, Lopez CM, Fisher JL. Empowering community health initiatives through evaluation. In: Fetterman D, Kaftarian S, Wandersman A, editors. Empowerment Evaluation: Knowledge And Tools Of Self-Assessment And Accountability. Thousand Oaks, CA: Sage; 1996. pp. 161–187.
  • Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” American Journal of Medicine. 1997; 103 :529–535. [ PubMed : 9428837 ]
  • Fischhoff B. Risk Perception and Risk Communication. Presented at the Workshop on Health, Communications and Behavior of the IOM Committee on Health and Behavior: Research, Practice and Policy; Irvine, CA. 1999a.
  • Fischhoff B. Why (cancer) risk communication can be hard. Journal of the National Cancer Institute Monographs. 1999b; 25 :7–13. [ PubMed : 10854449 ]
  • Fischhoff B, Bruine de Bruin W. Fifty/fifty=50? Journal of Behavioral Decision Making. 1999; 12 :149–163.
  • Fischhoff B, Downs J. Accentuate the relevant. Psychological Science. 1997; 18 :154–158.
  • Fisher EB Jr. The results of the COMMIT trial. American Journal of Public Health. 1995; 85 :159–160. [ PMC free article : PMC1615304 ] [ PubMed : 7856770 ]
  • Flay B. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986; 15 :451–474. [ PubMed : 3534875 ]
  • Flood AB, Wennberg JE, Nease RFJ, Fowler FJJ, Ding J, Hynes LM. The importance of patient preference in the decision to screen for prostate cancer. Prostate Patient Outcomes Research Team [see comments] Journal of General Internal Medicine. 1996; 11 :342–349. [ PubMed : 8803740 ]
  • Florin P, Wandersman A. An introduction to citizen participation, voluntary organizations, and community development: Insights for empowerment through research. American Journal of Community Psychology. 1990; 18 :41–53.
  • Francisco VT, Paine AL, Fawcett SB. A methodology for monitoring and evaluating community health coalitions. Health Education Research. 1993; 8 :403–416. [ PubMed : 10146477 ]
  • Freire P. Education for Critical Consciousness. New York: Continuum; 1987.
  • Frick MH, Elo O, Haapa K, Heinonen OP, Heinsalmi P, Helo P, Huttunen JK, Kaitaniemi P, Koskinen P, Manninen V, Maenpaa H, Malkonen M, Manttari M, Norola S, Pasternack A, Pikkarainen J, Romo M, Sjoblom T, Nikkila EA. Helsinki Heart Study: Primary-prevention trial with gemfibrozil in middle-aged men with dyslipidemia. Safety of treatment, changes in risk factors, and incidence of coronary heart disease. New England Journal of Medicine. 1987; 317 :1237–1245. [ PubMed : 3313041 ]
  • Friedman LM, Furberg CM, De Mets DL. Fundamentals of Clinical Trials. St. Louis: Mosby-Year Book; 1985.
  • Frosch M, Kaplan RM. Shared decision-making in clinical practice: Past research and future directions. American Journal of Preventive Medicine. 1999; 17 :285–294. [ PubMed : 10606197 ]
  • Gaventa J. The powerful, the powerless, and the experts: Knowledge struggles in an information age. In: Park P, Brydon-Miller M, Hall B, Jackson T, editors. Voices of Change: Participatory Research In The United States and Canada. Westport, CT: Bergin and Garvey; 1993. pp. 21–40.
  • Gentner D, Stevens A. Mental Models (Cognitive Science). Hillsdale, NJ: Erlbaum; 1983.
  • Gold MR, Siegel JE, Russell LB, Weinstein MC, editors. Cost-Effectiveness in Health And Medicine. New York: Oxford University Press; 1996.
  • Goldman L, Weinstein MC, Goldman PA, Williams LW. Cost-effectiveness of HMG-CoA reductase inhibition. Journal of the American Medical Association. 1991; 6 :1145–1151. [ PubMed : 1899896 ]
  • Golomb BA. Cholesterol and violence: is there a connection? Annals of Internal Medicine. 1998; 128 :478–487. [ PubMed : 9499332 ]
  • Goodman RM. Principles and tools for evaluating community-based prevention and health promotion programs. In: Brownson RC, Baker EA, Novick LF, editors. Community-Based Prevention Programs That Work. Gaithersburg, MD: Aspen; 1999. pp. 211–227.
  • Goodman RM, Wandersman A. FORECAST: A formative approach to evaluating community coalitions and community-based initiatives. Journal of Community Psychology, Supplement. 1994:6–25.
  • Goodman RM, Steckler A, Kegler MC. Mobilizing organizations for health enhancement: Theories of organizational change. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education. San Francisco: Jossey-Bass; 1997. pp. 287–312.
  • Gordon RL, Baker EL, Roper WL, Omenn GS. Prevention and the reforming U.S. health care system: Changing roles and responsibilities for public health. Annual Review of Public Health. 1996; 17 :489–509. [ PubMed : 8724237 ]
  • Gottlieb NH, McLeroy KR. Social health. In: O'Donnell MP, Harris JS, editors. Health promotion in the workplace. Albany, NY: Delmar; 1994. pp. 459–493.
  • Green LW. Evaluation and measurement: Some dilemmas for health education. American Journal of Public Health. 1977; 67 :155–166. [ PMC free article : PMC1653552 ] [ PubMed : 402085 ]
  • Green LW, Gordon NP. Productive research designs for health education investigations. Health-Education. 1982; 13 :4–10.
  • Green LW, Lewis FM. Measurement and Evaluation in Health Education and Health Promotion. Palo Alto, CA: Mayfield; 1986.
  • Green LW, George MA, Daniel M, Frankish CJ, Herbert CJ, Bowie WR, O'Neil M. Study of Participatory Research in Health Promotion. University of British Columbia, Vancouver: The Royal Society of Canada; 1995.
  • Green LW, Richard L, Potvin L. Ecological foundations of health promotion. American Journal of Health Promotion. 1996; 10 :270–281. [ PubMed : 10159708 ]
  • Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care. Annals of Internal Medicine. 1985; 102 :520–528. [ PubMed : 3977198 ]
  • Greenfield S, Kaplan SH, Ware JE, Yano EM, Frank HJL. Patients participation in medical care: Effects on blood sugar control and quality of life in diabetes. Journal of General Internal Medicine. 1988; 3 :448–457. [ PubMed : 3049968 ]
  • Greenwald P. Epidemiology: A step forward in the scientific approach to preventing cancer through chemoprevention. Public Health Reports. 1984; 99 :259–264. [ PMC free article : PMC1424586 ] [ PubMed : 6429723 ]
  • Greenwald P, Cullen JW. A scientific approach to cancer control. CA: A Cancer Journal for Clinicians. 1984; 34 :328–332. [ PubMed : 6437624 ]
  • Griffith HM, Dickey L, Kamerow DB. Put prevention into practice: a systematic approach. Journal of Public Health Management and Practice. 1995; 1 :9–15. [ PubMed : 10186631 ]
  • Guba EG, Lincoln YS. Fourth Generation Evaluation. Newbury Park, CA: Sage; 1989.
  • Hadden SG. Read The Label: Reducing Risk By Providing Information. Boulder, CO: Westview; 1986.
  • Hall BL. From margins to center? The development and purpose of participatory research. American Sociologist. 1992; 23 :15–28.
  • Hancock L, Sanson-Fisher RW, Redman S, Burton R, Burton L, Butler J, Girgis A, Gibberd R, Hensley M, McClintock A, Reid A, Schofield M, Tripodi T, Walsh R. Community action for health promotion: A review of methods and outcomes 1990–1995. American Journal of Preventive Medicine. 1997; 13 :229–239. [ PubMed : 9236957 ]
  • Hancock T. The healthy city from concept to application: Implications for research. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and Practice. New York: Routledge; 1993. pp. 14–24.
  • Hatch J, Moss N, Saran A, Presley-Cantrell L, Mallory C. Community research: partnership in Black communities. American Journal of Preventive Medicine. 1993; 9 :27–31. [ PubMed : 8123284 ]
  • He J, Ogden LG, Vupputuri S, Bazzano LA, Loria C, Whelton PK. Dietary sodium intake and subsequent risk of cardiovascular disease in overweight adults. Journal of the American Medical Association. 1999; 282 :2027–2034. [ PubMed : 10591385 ]
  • Health Care Financing Administration, Department of Health and Human Services. Highlights: National Health Expenditures, 1997. 1998. [Accessed October 31, 1998]. [On-line]. Available: http://www.hcfa.gov/stats/nhe-oact/hilites.htm.
  • Heaney CA, Goetzel RZ. A review of health-related outcomes of multi-component worksite health promotion programs. American Journal of Health Promotion. 1997; 11 :290–307. [ PubMed : 10165522 ]
  • Hingson R. Prevention of drinking and driving. Alcohol Health and Research World. 1996; 20 :219–226. [ PMC free article : PMC6876524 ] [ PubMed : 31798161 ]
  • Himmelman AT. Communities Working Collaboratively for a Change. University of Minnesota, MN: Humphrey Institute of Public Affairs; 1992.
  • Hollister RG, Hill J. Problems in the evaluation of community-wide initiatives. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, editors. New Approaches to Evaluating Community Initiatives. Washington, DC: Aspen; 1995. pp. 127–172.
  • Horwitz RI, Daniels SR. Bias or biology: Evaluating the epidemiologic studies of L-tryptophan and the eosinophilia-myalgia syndrome. Journal of Rheumatology Supplement. 1996; 46 :60–72. [ PubMed : 8895182 ]
  • Horwitz RI. Complexity and contradiction in clinical trial research. American Journal of Medicine. 1987a; 82 :498–510. [ PubMed : 3548349 ]
  • Horwitz RI. The experimental paradigm and observational studies of cause-effect relationships in clinical medicine. Journal of Chronic Disease. 1987b; 40 :91–99. [ PubMed : 3805237 ]
  • Horwitz RI, Singer BH, Makuch RW, Viscoli CM. Can treatment that is helpful on average be harmful to some patients? A study of the conflicting information needs of clinical inquiry and drug regulation. Journal of Clinical Epidemiology. 1996; 49 :395–400. [ PubMed : 8621989 ]
  • Horwitz RI, Viscoli CM, Clemens JD, Sadock RT. Developing improved observational methods for evaluating therapeutic effectiveness. American Journal of Medicine. 1990; 89 :630–638. [ PubMed : 1978566 ]
  • House ER. Evaluating with validity. Beverly Hills, CA: Sage; 1980.
  • Hugentobler MK, Israel BA, Schurman SJ. An action research approach to workplace health: Integrating methods. Health Education Quarterly. 1992; 19 :55–76. [ PubMed : 1568874 ]
  • Impicciatore P, Pandolfini C, Casella N, Bonati M. Reliability of health information for the public on the world wide web: Systematic survey of advice on managing fever in children at home. British Medical Journal. 1997; 314 :1875–1881. [ PMC free article : PMC2126984 ] [ PubMed : 9224132 ]
  • IOM (Institute of Medicine). Reducing the Burden of Injury: Advancing Prevention and Treatment. Washington, DC: National Academy; 1999. [ PubMed : 25101422 ]
  • IOM (Institute of Medicine). Speaking of Health: Assessing Health Communication Strategies for Diverse Populations. Chrvala C, Scrimshaw S, editors. Washington, DC: National Academy Press; 2001.
  • Israel BA. Practitioner-oriented Approaches to Evaluating Health Education Interventions: Multiple Purposes—Multiple Methods. Paper presented at the National Conference on Health Education and Health Promotion; Tampa, FL. 1994.
  • Israel BA, Schurman SJ. Social support, control and the stress process. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education: Theory, Research and Practice. San Francisco: Jossey-Bass; 1990. pp. 179–205.
  • Israel BA, Baker EA, Goldenhar LM, Heaney CA, Schurman SJ. Occupational stress, safety, and health: Conceptual framework and principles for effective prevention interventions. Journal of Occupational Health Psychology. 1996; 1 :261–286. [ PubMed : 9547051 ]
  • Israel BA, Checkoway B, Schulz AJ, Zimmerman MA. Health education and community empowerment: conceptualizing and measuring perceptions of individual, organizational, and community control. Health Education Quarterly. 1994; 21 :149–170. [ PubMed : 8021145 ]
  • Israel BA, Cummings KM, Dignan MB, Heaney CA, Perales DP, Simons-Morton BG, Zimmerman MA. Evaluation of health education programs: Current assessment and future directions. Health Education Quarterly. 1995; 22 :364–389. [ PubMed : 7591790 ]
  • Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health. 1998; 19 :173–202. [ PubMed : 9611617 ]
  • Israel BA, Schurman SJ, House JS. Action research on occupational stress: Involving workers as researchers. International Journal of Health Services. 1989; 19 :135–155. [ PubMed : 2925298 ]
  • Israel BA, Schurman SJ, Hugentobler MK. Conducting action research: Relationships between organization members and researchers. Journal of Applied Behavioral Science. 1992a; 28 :74–101.
  • Israel BA, Schurman SJ, Hugentobler MK, House JS. A participatory action research approach to reducing occupational stress in the United States. In: DiMartino V, editor. Preventing Stress at Work: Conditions of Work Digest. II. Geneva, Switzerland: International Labor Office; 1992b. pp. 152–163.
  • James SA. Racial and ethnic differences in infant mortality and low birth weight: A psychosocial critique. Annals of Epidemiology. 1993; 3 :130–136. [ PubMed : 8269064 ]
  • Johnson-Laird PN. Mental Models: Towards a Cognitive Science of Language, Inference and Consciousness. Cognitive Science, 6. New York: Cambridge University Press; 1980.
  • Kahneman D, Tversky A. Choices, values, and frames. American Psychologist. 1983; 39 :341–350.
  • Kahneman D, Tversky A. On the reality of cognitive illusions. Psychological Review. 1996; 103 :582–591. [ PubMed : 8759048 ]
  • Kalet A, Roberts JC, Fletcher R. How do physicians talk with their patients about risks? Journal of General Internal Medicine. 1994; 9 :402–404. [ PubMed : 7931751 ]
  • Kaplan RM. Value judgment in the Oregon Medicaid experiment. Medical Care. 1994; 32 :975–988. [ PubMed : 7934274 ]
  • Kaplan RM. Profile versus utility based measures of outcome for clinical trials. In: Staquet MJ, Hays RD, Fayers PM, editors. Quality of Life Assessment in Clinical Trials. London: Oxford University Press; 1998. pp. 69–90.
  • Kaplan RM, Anderson JP. The general health policy model: An integrated approach. In: Spilker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadelphia: Lippincott-Raven; 1996. pp. 309–322.
  • Kasper JF, Mulley AG, Wennberg JE. Developing shared decision-making programs to improve the quality of health care. Quality Review Bulletin. 1992; 18 :183–190. [ PubMed : 1379705 ]
  • Kass D, Freudenberg N. Coalition building to prevent childhood lead poisoning: A case study from New York City. In: Minkler M, editor. Community Organizing and Community Building for Health. New Brunswick, NJ: Rutgers University Press; 1997. pp. 278–288.
  • Kegler MC, Steckler A, Malek SH, McLeroy K. A multiple case study of implementation in 10 local Project ASSIST coalitions in North Carolina. Health Education Research. 1998a; 13 :225–238. [ PubMed : 10181021 ]
  • Kegler MC, Steckler A, McLeroy K, Malek SH. Factors that contribute to effective community health promotion coalitions: A study of 10 Project ASSIST coalitions in North Carolina. American Stop Smoking Intervention Study for Cancer Prevention. Health Education and Behavior. 1998b; 25 :338–353. [ PubMed : 9615243 ]
  • Klein DC. Community Dynamics and Mental Health. New York: Wiley; 1968.
  • Klitzner M. A public health/dynamic systems approach to community-wide alcohol and other drug initiatives. In: Davis RC, Lurigo AJ, Rosenbaum DP, editors. Drugs and the Community. Springfield, IL: Charles C. Thomas; 1993. pp. 201–224.
  • Koepsell TD. Epidemiologic issues in the design of community intervention trials. In: Brownson R, Petitti D, editors. Applied Epidemiology: Theory To Practice. New York: Oxford University Press; 1998. pp. 177–212.
  • Koepsell TD, Diehr PH, Cheadle A, Kristal A. Invited commentary: Symposium on community intervention trials. American Journal of Epidemiology. 1995; 142 :594–599. [ PubMed : 7653467 ]
  • Koepsell TD, Wagner EH, Cheadle AC, Patrick DL, Martin DC, Diehr PH, Perrin EB, Kristal AR, Allan-Andrilla CH, Dey LJ. Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health. 1992; 13 :31–57. [ PubMed : 1599591 ]
  • Kong A, Barnett GO, Mosteller F, Youtz C. How medical professionals evaluate expressions of probability. New England Journal of Medicine. 1986; 315 :740–744. [ PubMed : 3748081 ]
  • Kraus JF. Effectiveness of measures to prevent unintentional deaths of infants and children from suffocation and strangulation. Public Health Report. 1985; 100 :231–240. [ PMC free article : PMC1424727 ] [ PubMed : 3920722 ]
  • Kraus JF, Peek C, McArthur DL, Williams A. The effect of the 1992 California motorcycle helmet use law on motorcycle crash fatalities and injuries. Journal of the American Medical Association. 1994; 272 :1506–1511. [ PubMed : 7966842 ]
  • Krieger N. Epidemiology and the web of causation: Has anyone seen the spider? Social Science and Medicine. 1994; 39 :887–903. [ PubMed : 7992123 ]
  • Krieger N, Rowley DL, Herman AA, Avery B, Phillips MT. Racism, sexism and social class: Implications for studies of health, disease and well-being. American Journal of Preventive Medicine. 1993; 9 :82–122. [ PubMed : 8123288 ]
  • La Puma J, Lawlor EF. Quality-adjusted life-years. Ethical implications for physicians and policymakers. Journal of the American Medical Association. 1990; 263 :2917–2921. [ PubMed : 2110986 ]
  • Labonte R. Health promotion and empowerment: reflections on professionalpractice. Health Education Quarterly. 1994; 21 :253–268. [ PubMed : 8021151 ]
  • Lalonde M. A new perspective on the health of Canadians. Ottawa, ON: Ministry of Supply and Services; 1974.
  • Lando HA, Pechacek TF, Pirie PL, Murray DM, Mittelmark MB, Lichtenstein E, Nothwehyr F, Gray C. Changes in adult cigarette smoking in the Minnesota Heart Health Program. American Journal of Public Health. 1995; 85 :201–208. [ PMC free article : PMC1615309 ] [ PubMed : 7856779 ]
  • Lantz PM, House JS, Lepkowski JM, Williams DR, Mero RP, Chen J. Socioeconomic factors, health behaviors, and mortality. Journal of the American Medical Association. 1998; 279 :1703–1708. [ PubMed : 9624022 ]
  • Last J. Redefining the unacceptable. Lancet. 1995; 346 :1642–1643. [ PubMed : 8551816 ]
  • Lather P. Research as praxis. Harvard Educational Review. 1986; 56 :259–277.
  • Lenert L, Kaplan RM. Validity and interpretation of preference-based measures of health-related quality of life. Medical Care. 2000; 38 :138–150. [ PubMed : 10982099 ]
  • Leventhal H, Cameron L. Behavioral theories and the problem of compliance. Patient Education and Counseling. 1987; 10 :117–138.
  • Levine DM, Becker DM, Bone LR, Stillman FA, Tuggle MB II, Prentice M, Carter J, Filippeli J. A partnership with minority populations: A community model of effectiveness research. Ethnicity and Disease. 1992; 2 :296–305. [ PubMed : 1467764 ]
  • Lewin K. Field Theory in Social Science. New York: Harper; 1951.
  • Lewis CE. Disease prevention and health promotion practices of primary care physicians in the United States. American Journal of Preventive Medicine. 1988; 4 :9–16. [ PubMed : 3079144 ]
  • Liao L, Jollis JG, DeLong ER, Peterson ED, Morris KG, Mark DB. Impact of an interactive video on decision making of patients with ischemic heart disease. Journal of General Internal Medicine. 1996; 11 :373–376. [ PubMed : 8803746 ]
  • Lichter AS, Lippman ME, Danforth DN Jr, d'Angelo T, Steinberg SM, deMoss E, MacDonald HD, Reichert CM, Merino M, Swain SM, et al. Mastectomy versus breast-conserving therapy in the treatment of stage I and II carcinoma of the breast: A randomized trial at the National Cancer Institute. Journal of Clinical Oncology. 1992; 10 :976–983. [ PubMed : 1588378 ]
  • Lillie-Blanton M, Hoffman SC. Conducting an assessment of health needs and resources in a racial/ethnic minority community. Health Services Research. 1995; 30 :225–236. [ PMC free article : PMC1070051 ] [ PubMed : 7721594 ]
  • Lincoln YS, Reason P. Editor's introduction. Qualitative Inquiry. 1996; 2 :5–11.
  • Linville PW, Fischer GW, Fischhoff B. AIDS risk perceptions and decision biases. In: Pryor JB, Reeder GD, editors. The Social Psychology of HIV Infection. Hillsdale, NJ: Lawrence Erlbaum; 1993. pp. 5–38.
  • Lipid Research Clinics Program. The Lipid Research Clinics Coronary Primary Prevention Trial results. I. Reduction in incidence of coronary heart disease. Journal of the American Medical Association. 1984; 251 :351–364. [ PubMed : 6361299 ]
  • Lipkus IM, Hollands JG. The visual communication of risk. Journal of National Cancer Institute Monographs. 1999; 25 :149–162. [ PubMed : 10854471 ]
  • Lipsey MW. Theory as method: Small theories of treatments. New Direction in Program Evaluation. 1993; 57 :5–38.
  • Lipsey MW, Polard JA. Driving toward theory in program evaluation: More models to choose from. Evaluation and Program Planning. 1989; 12 :317–328.
  • Lund AK, Williams AF, Womack KN. Motorcycle helmet use in Texas. Public Health Reports. 1991; 106 :576–578. [ PMC free article : PMC1580316 ] [ PubMed : 1910193 ]
  • Maguire P. Doing Participatory Research: A Feminist Approach. Amherst, MA: School of Education, The University of Massachusetts; 1987.
  • Maguire P. Considering more feminist participatory research: What's congruency got to do with it? Qualitative Inquiry. 1996; 2 :106–118.
  • Marin G, Marin BV. Research with Hispanic Populations. Newbury Park, CA: Sage; 1991.
  • Matt GE, Navarro AM. What meta-analyses have and have not taught us about psychotherapy effects: A review and future directions. Clinical Psychology Review. 1997; 17 :1–32. [ PubMed : 9125365 ]
  • Mazur DJ, Hickam DH. Patients' preferences for risk disclosure and role in decision making for invasive medical procedures. Journal of General Internal Medicine. 1997; 12 :114–117. [ PMC free article : PMC1497069 ] [ PubMed : 9051561 ]
  • McGraw SA, Stone EJ, Osganian SK, Elder JP, Perry CL, Johnson CC, Parcel GS, Webber LS, Luepker RV. Design of process evaluation within the child and adolescent trial for cardiovascular health (CATCH). Health Education Quarterly. 1994:S5–S26. [ PubMed : 8113062 ]
  • McIntyre S, West P. What does the phrase “safer sex” mean to you? AIDS. 1992; 7 :121–126. [ PubMed : 8442902 ]
  • McKay HG, Feil EG, Glasgow RE, Brown JE. Feasibility and use of an internet support service for diabetes self-management. The Diabetes Educator. 1998; 24 :174–179. [ PubMed : 9555356 ]
  • McKinlay JB. The promotion of health through planned sociopolitical change: challenges for research and policy. Social Science and Medicine. 1993; 36 :109–117. [ PubMed : 8421787 ]
  • McKnight JL. Regenerating community. Social Policy. 1987; 17 :54–58.
  • McKnight JL. Politicizing health care. In: Conrad P, Kern R, editors. The Sociology Of Health And Illness: Critical Perspectives. New York: St. Martin's; 1994. pp. 437–441.
  • McVea K, Crabtree BF, Medder JD, Susman JL, Lukas L, McIlvain HE, Davis CM, Gilbert CS, Hawver M. An ounce of prevention? Evaluation of the ‘Put Prevention into Practice' program. Journal of Family Practice. 1996; 43 :361–369. [ PubMed : 8874371 ]
  • Merz J, Fischhoff B, Mazur DJ, Fischbeck PS. Decision-analytic approach to developing standards of disclosure for medical informed consent. Journal of Toxics and Liability. 1993; 15 :191–215.
  • Minkler M. Health education, health promotion and the open society: An historical perspective. Health Education Quarterly. 1989; 16 :17–30. [ PubMed : 2649456 ]
  • Mittelmark MB, Hunt MK, Heath GW, Schmid TL. Realistic outcomes: Lessons from community-based research and demonstration programs for the prevention of cardiovascular diseases. Journal of Public Health Policy. 1993; 14 :437–462. [ PubMed : 8163634 ]
  • Monahan JL, Scheirer MA. The role of linking agents in the diffusion of health promotion programs. Health Education Quarterly. 1988; 15 :417–434. [ PubMed : 3230017 ]
  • Morgan MG. Fields from Electric Power [brochure]. Pittsburgh, PA: Department of Engineering and Public Policy, Carnegie Mellon University; 1995.
  • Morgan MG, Fischhoff B, Bostrom A, Atman C. Risk Communication: The Mental Models Approach. New York: Cambridge University Press; 2001.
  • Mosteller F, Colditz GA. Understanding research synthesis (meta-analysis). Annual Review of Public Health. 1996; 17 :1–23. [ PubMed : 8724213 ]
  • Muldoon MF, Manuck SB, Matthews KA. Lowering cholesterol concentrations and mortality: A quantitative review of primary prevention trials. British Medical Journal. 1990; 301 :309–314. [ PMC free article : PMC1663605 ] [ PubMed : 2144195 ]
  • Murray D. Design and analysis of community trials: Lessons from the Minnesota Heart Health Program. American Journal of Epidemiology. 1995; 142 :569–575. [ PubMed : 7653464 ]
  • Murray DM. Dissemination of community health promotion programs: The Fargo-Moorhead Heart Health Program. Journal of School Health. 1986; 56 :375–381. [ PubMed : 3640927 ]
  • Myers AM, Pfeiffle P, Hinsdale K. Building a community-based consortium for AIDS patient services. Public Health Reports. 1994; 109 :555–562. [ PMC free article : PMC1403533 ] [ PubMed : 8041856 ]
  • National Research Council, Committee on Risk Perception and Communication. Improving Risk Communication. Washington, DC: National Academy Press; 1989.
  • NHLBI (National Heart, Lung, and Blood Institute). Guidelines for Demonstration And Education Research Grants. Washington, DC: National Institutes of Health; 1983.
  • NHLBI (National Heart, Lung, and Blood Institute). Report of the Task Force on Behavioral Research in Cardiovascular, Lung, and Blood Health and Disease. Bethesda, MD: National Institutes of Health; 1998.
  • Ni H, Sacks JJ, Curtis L, Cieslak PR, Hedberg K. Evaluation of a statewide bicycle helmet law via multiple measures of helmet use. Archives of Pediatric and Adolescent Medicine. 1997; 151 :59–65. [ PubMed : 9006530 ]
  • Nyden PW, Wiewel W. Collaborative research: harnessing the tensions between researcher and practitioner. American Sociologist. 1992; 24 :43–55.
  • O'Connor PJ, Solberg LI, Baird M. The future of primary care. The enhanced primary care model. Journal of Family Practice. 1998; 47 :62–67. [ PubMed : 9673610 ]
  • Office of Technology Assessment, U.S. Congress. Cost-Effectiveness of Influenza Vaccination. Washington, DC: Office of Technology Assessment; 1981.
  • Oldenburg B, French M, Sallis JF. Health behavior research: The quality of the evidence base. Paper presented at the Society of Behavioral Medicine Twentieth Annual Meeting; San Diego, CA. 1999.
  • Orlandi MA. Health Promotion Technology Transfer: Organizational Perspectives. Canadian Journal of Public Health. 1996a; 87 (Supplement 2):528–533. [ PubMed : 9002340 ]
  • Orlandi MA. Prevention technologies for drug-involved youth. In: Intervening with Drug-Involved Youth: Prevention, Treatment, and Research. Newbury Park, CA: Sage Publications; 1996b. pp. 81–100.
  • Orlandi MA. The diffusion and adoption of worksite health promotion innovations: An analysis of barriers. Preventive Medicine. 1986; 15 :522–536. [ PubMed : 3774782 ]
  • Parcel GS, Eriksen MP, Lovato CY, Gottlieb NH, Brink SG, Green LW. The diffusion of school-based tobacco-use prevention programs: Program description and baseline data. Health Education Research. 1989; 4 :111–124.
  • Parcel GS, O'Hara-Tompkins NM, Harris RB, Basen-Engquist KM, McCormick LK, Gottlieb NH, Eriksen MP. Diffusion of an Effective Tobacco Prevention Program. II. Evaluation of the Adoption Phase. Health Education Research. 1995; 10 :297–307. [ PubMed : 10158027 ]
  • Parcel GS, Perry CL, Taylor WC. Beyond Demonstration: Diffusion of Health Promotion Innovations. In: Bracht N, editor. Health Promotion at the Community Level. Thousand Oaks, CA: Sage Publications; 1990. pp. 229–251.
  • Parcel GS, Simons-Morton BG, O'Hara NM, Baranowski T, Wilson B. School promotion of healthful diet and physical activity: Impact on learning outcomes and self-reported behavior. Health Education Quarterly. 1989; 16 :181–199. [ PubMed : 2732062 ]
  • Park P, Brydon-Miller M, Hall B, Jackson T, editors. Voices of Change: Participatory Research in the United States and Canada. Westport, CT: Bergin and Garvey; 1993.
  • Parker EA, Schulz AJ, Israel BA, Hollis R. East Side Village Health Worker Partnership: Community-based health advisor intervention in an urban area. Health Education and Behavior. 1998; 25 :24–45. [ PubMed : 9474498 ]
  • Parsons T. The Social System. Glencoe, IL: Free Press; 1951.
  • Patton MQ. How to Use Qualitative Methods In Evaluation. Newbury Park, CA: Sage Publications; 1987.
  • Patton MQ. Qualitative Evaluation And Research Methods. 2nd Edition. Newbury Park, CA: Sage Publications; 1990.
  • Pearce N. Traditional epidemiology, modern epidemiology and public health. American Journal of Public Health. 1996; 86 :678–683. [ PMC free article : PMC1380476 ] [ PubMed : 8629719 ]
  • Pendleton L, House WC. Preferences for treatment approaches in medical care. Medical Care. 1984; 22 :644–646. [ PubMed : 6748782 ]
  • Pentz MA. Research to practice in community-based prevention trials. Programs and abstracts from NIH Conference, Preventive Intervention Research at the Crossroads: Contributions and Opportunities from the Behavioral and Social Sciences; Bethesda, MD. 1998. pp. 82–83.
  • Pentz MA, Trebow E. Implementation issues in drug abuse prevention research. Substance Use and Misuse. 1997; 32 :1655–1660. [ PubMed : 1922302 ]
  • Pentz MA, Trebow E, Hansen WB, MacKinnon DP, Dwyer JH, Flay BR, Daniels S, Cormack C, Johnson CA. Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP). Evaluation Review. 1990; 14 :264–289.
  • Perry CL. Cardiovascular disease prevention among youth: Visioning the future. Preventive Medicine. 1999; 29 :S79–S83. [ PubMed : 10641822 ]
  • Perry CL, Murray DM, Griffin G. Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health. 1990; 60 :501–504. [ PubMed : 2283869 ]
  • Plough A, Olafson F. Implementing the Boston Healthy Start Initiative: A case study of community empowerment and public health. Health Education Quarterly. 1994; 21 :221–234. [ PubMed : 8021149 ]
  • Price RH. Prevention programming as organizational reinvention: From research to implementation. In: Silverman MM, Anthony V, editors. Prevention of Mental Disorders, Alcohol and Drug Use in Children and Adolescents. Rockville, MD: Department of Health and Human Services; 1989. pp. 97–123.
  • Price RH. Theory guided reinvention as the key to high fidelity prevention practice. Paper presented at the National Institutes of Health meeting, “Preventive Intervention Research at the Crossroads: Contributions and Opportunities from the Behavioral and Social Sciences”; Bethesda, MD. 1998.
  • Pronk NP, O'Connor PJ. Systems approach to population health improvement. Journal of Ambulatory Care Management. 1997; 20 :24–31. [ PubMed : 10181620 ]
  • Putnam RD. Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University; 1993.
  • Rabeneck L, Viscoli CM, Horwitz RI. Problems in the conduct and analysis of randomized clinical trials. Are we getting the right answers to the wrong questions? Archives of Internal Medicine. 1992; 152 :507–512. [ PubMed : 1546913 ]
  • Raiffa H. Decision Analysis. Reading, MA: Addison-Wesley; 1968.
  • Reason P. Three approaches to participative inquiry. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage; 1994. pp. 324–339.
  • Reason P, editor. Human Inquiry in Action: Developments in New Paradigm Research. London: Sage; 1988.
  • Reichardt CS, Cook TD. “Paradigms Lost”: Some thoughts on choosing methods in evaluation research. Evaluation and Program Planning: An International Journal. 1980; 3 :229–236.
  • Rivara FP, Grossman DC, Cummings P. Injury prevention. First of two parts. New England Journal of Medicine. 1997a; 337 :543–548. [ PubMed : 9262499 ]
  • Rivara FP, Grossman DC, Cummings P. Injury prevention. Second of two parts. New England Journal of Medicine. 1997b; 337 :613–618. [ PubMed : 9271485 ]
  • Roberts-Gray C, Solomon T, Gottlieb N, Kelsey E. Heart partners: A strategy for promoting effective diffusion of school health promotion programs. Journal of School Health. 1998; 68 :106–116. [ PubMed : 9608451 ]
  • Robertson A, Minkler M. New health promotion movement: A critical examination. Health Education Quarterly. 1994; 21 :295–312. [ PubMed : 8002355 ]
  • Rogers EM. Diffusion of Innovations. 3rd ed. New York: The Free Press; 1983.
  • Rogers EM. Diffusion of Innovations. 4th ed. New York: The Free Press; 1995.
  • Rogers GB. The safety effects of child-resistant packaging for oral prescription drugs. Two decades of experience. Journal of the American Medical Association. 1996; 275 :1661–1665. [ PubMed : 8637140 ]
  • Rohrbach LA, D'Onofrio C, Backer T, Montgomery S. Diffusion of school-based substance abuse prevention programs. American Behavioral Scientist. 1996; 39 :919–934.
  • Rossi PH, Freeman HE. Evaluation: A Systematic Approach. Newbury Park, CA: Sage Publications; 1989.
  • Rutherford GW. Public health, communicable diseases, and managed care: Will managed care improve or weaken communicable disease control? American Journal of Preventive Medicine. 1998; 14 :53–59. [ PubMed : 9566938 ]
  • Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone; 1997.
  • Sarason SB. The Psychological Sense of Community: Prospects for a Community Psychology. San Francisco: Jossey-Bass; 1984.
  • Schein EH. Process Consultation. Reading, MA: Addison-Wesley; 1987.
  • Schensul JJ, Denelli-Hess D, Borreo MG, Bhavati MP. Urban comadronas: Maternal and child health research and policy formulation in a Puerto Rican community. In: Stull DD, Schensul JJ, editors. Collaborative Research and Social Change: Applied Anthropology in Action. Boulder, CO: Westview; 1987. pp. 9–32.
  • Schensul SL. Science, theory and application in anthropology. American Behavioral Scientist. 1985; 29 :164–185.
  • Schneiderman LJ, Kronick R, Kaplan RM, Anderson JP, Langer RD. Effects of offering advance directives on medical treatments and costs. Annals of Internal Medicine. 1992; 117 :599–606. [ PubMed : 1524334 ]
  • Schriver KA. Evaluating text quality: The continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication. 1989; 32 :238–255.
  • Schulz AJ, Israel BA, Selig SM, Bayer IS. Development and implementation of principles for community-based research in public health. In: Macnair RH, editor. Research Strategies For Community Practice. New York: Haworth Press; 1998a. pp. 83–110.
  • Schulz AJ, Parker EA, Israel BA, Becker AB, Maciak B, Hollis R. Conducting a participatory community-based survey: Collecting and interpreting data for a community health intervention on Detroit's East Side. Journal of Public Health Management Practice. 1998b; 4 :10–24. [ PubMed : 10186730 ]
  • Schwartz LM, Woloshin S, Black WC, Welch HG. The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine. 1997; 127 :966–972. [ PubMed : 9412301 ]
  • Schwartz N. Self-reports: How the questions shape the answer. American Psychologist. 1999; 54 :93–105.
  • Seligman ME. Science as an ally of practice. American Psychologist. 1996; 51 :1072–1079. [ PubMed : 8870544 ]
  • Shadish WR, Cook TD, Leviton LC. Foundations of Program Evaluation. Newbury Park, CA: Sage Publications; 1991.
  • Shadish WR, Matt GE, Navarro AM, Siegle G, Crits-Christoph P, Hazelrigg MD, Jorm AF, Lyons LC, Nietzel MT, Prout HT, Robinson L, Smith ML, Svartberg M, Weiss B. Evidence that therapy works in clinically representative conditions. Journal of Consulting and Clinical Psychology. 1997; 65 :355–365. [ PubMed : 9170759 ]
  • Sharf BF. Communicating breast cancer on-line: Support and empowerment on the internet. Women and Health. 1997; 26 :65–83. [ PubMed : 9311100 ]
  • Simons-Morton BG, Green WA, Gottlieb N. Health Education and Health Promotion. Prospect Heights, IL: Waveland; 1995.
  • Simons-Morton BG, Parcel GP, Baranowski T, O'Hara N, Forthofer R. Promoting a healthful diet and physical activity among children: Results of a school-based intervention study. American Journal of Public Health. 1991; 81 :986–991. [ PMC free article : PMC1405714 ] [ PubMed : 1854016 ]
  • Singer M. Knowledge for use: Anthropology and community-centered substance abuse research. Social Science and Medicine. 1993; 37 :15–25. [ PubMed : 8332920 ]
  • Singer M. Community-centered praxis: Toward an alternative non-dominative applied anthropology. Human Organization. 1994; 53 :336–344.
  • Smith DW, Steckler A, McCormick LK, McLeroy KR. Lessons learned about disseminating health curricula to schools. Journal of Health Education. 1995; 26 :37–43.
  • Smithies J, Adams L. Walking the tightrope. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and Practice. New York: Routledge; 1993. pp. 55–70.
  • Solberg LI, Kottke TE, Brekke ML. Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Preventive Medicine. 1998a; 27 :623–631. [ PubMed : 9672958 ]
  • Solberg LI, Kottke TE, Brekke ML, Conn SA, Calomeni CA, Conboy KS. Delivering clinical preventive services is a systems problem. Annals of Behavioral Medicine. 1998b; 19 :271–278. [ PubMed : 9603701 ]
  • Sorensen G, Emmons K, Hunt MK, Johnston D. Implications of the results of community intervention trials. Annual Review of Public Health. 1998a; 19 :379–416. [ PubMed : 9611625 ]
  • Sorensen G, Thompson B, Basen-Engquist K, Abrams D, Kuniyuki A, DiClemente C, Biener L. Durability, dissemination and institutionalization of worksite tobacco control programs: Results from the Working Well Trial. International Journal of Behavioral Medicine. 1998b; 5 :335–351. [ PubMed : 16250700 ]
  • Spilker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadelphia: Lippincott-Raven; 1996.
  • Steckler A, Goodman RM, McLeroy KR, Davis S, Koch G. Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion. 1992; 6 :214–224. [ PubMed : 10148679 ]
  • Steckler AB, Dawson L, Israel BA, Eng E. Community health development: An overview of the works of Guy W. Steuart. Health Education Quarterly. 1993;(Suppl. 1):S3–S20. [ PubMed : 8354649 ]
  • Steckler AB, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly. 1992; 19 :1–8. [ PubMed : 1568869 ]
  • Steuart GW. Social and cultural perspectives: Community intervention and mental health. Health Education Quarterly. 1993:S99. [ PubMed : 8354654 ]
  • Stokols D. Establishing and maintaining healthy environments: Toward a social ecology of health promotion. American Psychologist. 1992; 47 :6–22. [ PubMed : 1539925 ]
  • Stokols D. Translating social ecological theory into guidelines for community health promotion. American Journal of Health Promotion. 1996; 10 :282–298. [ PubMed : 10159709 ]
  • Stone EJ, McGraw SA, Osganian SK, Elder JP. Process evaluation in the multicenter Child and Adolescent Trial for Cardiovascular Health (CATCH). Health Education Quarterly. 1994;(Suppl. 2):1–143. [ PubMed : 8113062 ]
  • Stringer ET. Action Research: A Handbook For Practitioners. Thousand Oaks, CA: Sage; 1996.
  • Strull WM, Lo B, Charles G. Do patients want to participate in medical decision making? Journal of the American Medical Association. 1984; 252 :2990–2994. [ PubMed : 6502860 ]
  • Strum S. Consultation and patient information on the Internet: The patients' forum. British Journal of Urology. 1997; 80 :22–26. [ PubMed : 9415081 ]
  • Susser M. The tribulations of trials-intervention in communities. American Journal of Public Health. 1995; 85 :156–158. [ PMC free article : PMC1615322 ] [ PubMed : 7856769 ]
  • Susser M. Choosing a future for epidemiology. I. Eras and paradigms. American Journal of Public Health. 1996a; 86 :668–673. [ PMC free article : PMC1380474 ] [ PubMed : 8629717 ]
  • Susser M, Susser E. From black box to Chinese boxes and eco-epidemiology. American Journal of Public Health. 1996b; 86 :674–677. [ PMC free article : PMC1380475 ] [ PubMed : 8629718 ]
  • Tandon R. Participatory evaluation and research: Main concepts and issues. In: Fernandes W, Tandon R, editors. Participatory Research and Evaluation. New Delhi: Indian Social Institute; 1981. pp. 15–34.
  • Thomas SB, Morgan CH. Evaluation of community-based AIDS education and risk reduction projects in ethnic and racial minority communities. Evaluation and Program Planning. 1991; 14 :247–255.
  • Thompson DC, Nunn ME, Thompson RS, Rivara FP. Effectiveness of bicycle safety helmets in preventing serious facial injury. Journal of the American Medical Association. 1996a; 276 :1974–1975. [ PubMed : 8971067 ]
  • Thompson DC, Rivara FP, Thompson RS. Effectiveness of bicycle safety helmets in preventing head injuries: A case-control study. Journal of the American Medical Association. 1996b; 276 :1968–1973. [ PubMed : 8971066 ]
  • Thompson RS, Taplin SH, McAfee TA, Mandelson MT, Smith AE. Primary and secondary prevention services in clinical practice. Twenty years' experience in development, implementation, and evaluation. Journal of the American Medical Association. 1995; 273 :1130–1135. [ PubMed : 7707602 ]
  • Torrance GW. Toward a utility theory foundation for health status index models. Health Services Research. 1976; 11 :349–369. [ PMC free article : PMC1071938 ] [ PubMed : 1025050 ]
  • Tversky A, Fox CR. Weighing risk and uncertainty. Psychological Review. 1995; 102 :269–283.
  • Tversky A, Kahneman D. Rational choice and the framing of decisions. In: Bell DE, Raiffa H, Tversky A, editors. Decision Making: Descriptive, Normative, And Prescriptive Interactions. Cambridge: Cambridge University Press; 1988. pp. 167–192.
  • Tversky A, Shafir E. The disjunction effect in choice under uncertainty. Psychological Science. 1992; 3 :305–309.
  • U.S. Department of Health and Human Services. Status Report. Washington, DC: NIH Publication #90-3107; 1990. Smoking, Tobacco, and CancerProgram: 1985–1989.
  • Vega WA. Theoretical and pragmatic implications of cultural diversity for community research. American Journal of Community Psychology. 1992; 20 :375–391.
  • Von Winterfeldt D, Edwards W. Decision Analysis and Behavioral Research. New York: Cambridge University Press; 1986.
  • Wagner E, Austin B, Von Korff M. Organizing care for patients with chronic illness. Millbank Quarterly. 1996; 76 :511–544. [ PubMed : 8941260 ]
  • Wallerstein N. Powerlessness, empowerment, and health: implications for health promotion programs. American Journal of Health Promotion. 1992; 6 :197–205. [ PubMed : 10146784 ]
  • Walsh JME, McPhee SJ. A systems model of clinical preventive care: An analysis of factors influencing patient and physician. Health Education Quarterly. 1992; 19 :157–175. [ PubMed : 1618625 ]
  • Walter HJ. Primary prevention of chronic disease among children: The school-based “Know Your Body Intervention Trials.” Health Education Quarterly. 1989; 16 :201–214. [ PubMed : 2732063 ]
  • Waterworth S, Luker KA. Reluctant collaborators: Do patients want to be involved in decisions concerning care? Journal of Advanced Nursing. 1990; 15 :971–976. [ PubMed : 2229694 ]
  • Weisz JR, Weiss B, Donenberg GR. The lab versus the clinic. Effects of child and adolescent psychotherapy. American Psychologist. 1992; 47 :1578–1585. [ PubMed : 1476328 ]
  • Wennberg JE. Shared decision making and multimedia. In: Harris LM, editor. Health and the New Media: Technologist Transforming Personal And Public Health. Mahwah, NJ: Erlbaum; 1995. pp. 109–126.
  • Wennberg JE. The Dartmouth Atlas Of Health Care In the United States. Hanover, NH: Trustees of Dartmouth College; 1998.
  • Whitehead M. The ownership of research. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and practice. New York: Routledge; 1993. pp. 83–89.
  • Williams DR, Collins C. U.S. socioeconomic and racial differences in health: patterns and explanations. Annual Review of Sociology. 1995; 21 :349–386.
  • Windsor R, Baranowski T, Clark N, Cutter G. Evaluation Of Health Promotion, Health Education And Disease Prevention Programs. Mountain View, CA: Mayfield; 1994.
  • Winkleby MA. The future of community-based cardiovascular disease intervention studies. American Journal of Public Health. 1994; 84 :1369–1372. [ PMC free article : PMC1615141 ] [ PubMed : 8092354 ]
  • Woloshin S, Schwartz LM, Byram SJ, Sox HC, Fischhoff B, Welch HG. Women's understanding of the mammography screening debate. Archives of Internal Medicine. 2000; 160 :1434–1440. [ PubMed : 10826455 ]
  • World Health Organization (WHO). Ottawa Charter for Health Promotion. Copenhagen: WHO; 1986.
  • Yates JF. Englewood Cliffs. NJ: Prentice-Hall; 1990. Judgment and Decision Making.
  • Yeich S, Levine R. Participatory research's contribution to a conceptualization of empowerment. Journal of Applied Social Psychology. 1992; 22 :1894–1908.
  • Yin RK. Applied Social Research Methods Series. Vol. 34. Newbury Park, CA: Sage Publications; 1993. Applications of case study research.
  • Zhu SH, Anderson NH. Self-estimation of weight parameter in multi-attribute analysis. Organizational Behavior and Human Decision Processes. 1991; 48 :36–54.
  • Zich J, Temoshok C. Applied methodology: A primer of pitfalls and opportunities in AIDS research. In: Feldman D, Johnson T, editors. The Social Dimensions of AIDS. New York: Praeger; 1986. pp. 41–60.
Source: Institute of Medicine (US) Committee on Health and Behavior: Research, Practice, and Policy. Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington (DC): National Academies Press (US); 2001. Chapter 7, Evaluating and Disseminating Intervention Research.
Six steps in quality intervention development (6SQuID)

Daniel Wight (1), Erica Wimbush (2), Ruth Jepson (3), Lawrence Doi (3)

(1) MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
(2) Evaluation Team, NHS Health Scotland, Edinburgh, UK
(3) MRC/CSO Scottish Collaboration for Public Health Research and Policy, University of Edinburgh, Edinburgh, UK

Correspondence to Professor Daniel Wight, MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, 200 Renfield Street, Glasgow G2 3QB, UK; d.wight{at}sphsu.mrc.ac.uk

Improving the effectiveness of public health interventions relies as much on the attention paid to their design and feasibility as to their evaluation. Yet, compared to the vast literature on how to evaluate interventions, there is little to guide researchers or practitioners on how best to develop such interventions in practical, logical, evidence based ways to maximise likely effectiveness. Existing models for the development of public health interventions tend to have a strong social-psychological, individual behaviour change orientation and some take years to implement. This paper presents a pragmatic guide to six essential Steps for Quality Intervention Development (6SQuID). The focus is on public health interventions but the model should have wider applicability. Once a problem has been identified as needing intervention, the process of designing an intervention can be broken down into six crucial steps: (1) defining and understanding the problem and its causes; (2) identifying which causal or contextual factors are modifiable: which have the greatest scope for change and who would benefit most; (3) deciding on the mechanisms of change; (4) clarifying how these will be delivered; (5) testing and adapting the intervention; and (6) collecting sufficient evidence of effectiveness to proceed to a rigorous evaluation. If each of these steps is carefully addressed, better use will be made of scarce public resources by avoiding the costly evaluation, or implementation, of unpromising interventions.

  • PUBLIC HEALTH
  • HEALTH PROMOTION

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/

https://doi.org/10.1136/jech-2015-205952


Introduction

Improving the effectiveness of public health interventions depends as much on improving their design as on their evaluation. 1 Yet, compared to the vast literature on intervention evaluation, 2–5 there is little to guide researchers or practitioners on developing interventions in logical, evidence-based ways to maximise effectiveness. Poor intervention design can waste public resources through expensive evaluation or, worse, through the unevaluated implementation of ineffective interventions.

Existing frameworks and guidance for the development of interventions 3 , 4 , 6–10 are summarised in table 1 . These tend to be orientated towards social-psychological, individual behaviour change and either provide little specific detail on intervention development or require great technical skills and resources. Drawing on the strengths of these existing frameworks and our own experiences, this article outlines a pragmatic six-step guide to the essential stages of intervention development to assist public health practitioners and researchers. The focus is on public health interventions, although the model should have wider applicability.

Table 1. Existing frameworks and guidance for public health intervention development

A public health intervention is defined as planned actions to prevent or reduce a particular health problem, or the determinants of the problem, in a defined population. Most require some level of social interaction. They are rarely simple, singular actions that can be easily replicated but are more often complicated (multicomponent) or complex programmes (with feedback loops and emergent outcomes) 11 designed to effect change at several levels of the socioecological model 12 (table 2). By and large, 'upstream' interventions 'require less individual effort (from recipients) and have the greatest population impact', 13, 14 whereas interventions requiring voluntary uptake are more likely to exacerbate health inequalities. 15

Table 2. Examples of interventions, mechanisms and outcomes at different levels

Interventions are best developed through collaborations between interdisciplinary teams of practitioners, researchers, the affected population and policymakers. Such coproduction maximises the likelihood of intervention effectiveness by improving: the fit with the target group's perceived needs and thus acceptability; practicality; evaluability, including the theorising of causal pathways; and uptake by practitioners and policymakers.

Main steps in public health intervention development

1. Define and understand the problem and its causes.

2. Clarify which causal or contextual factors are malleable and have greatest scope for change.

3. Identify how to bring about change: the change mechanism.

4. Identify how to deliver the change mechanism.

5. Test and refine on small scale.

6. Collect sufficient evidence of effectiveness to justify rigorous evaluation/implementation.

1. DEFINE AND UNDERSTAND THE PROBLEM AND ITS CAUSES

Our starting point is that a public health problem has already been identified as requiring intervention. Often this results from a needs assessment, for which there are several practical guides, 8 , 18 or from a political process such as a manifesto commitment.

Clarifying the problem with stakeholders, using the existing research evidence, is the first step in intervention development. Some health problems are relatively easily defined and measured, such as the prevalence of a readily diagnosed disease, but others have several dimensions and may be perceived differently by different groups. For instance, 'unhealthy housing' could be attributed to poor construction, antisocial behaviour, overcrowding or lack of amenities. Definitions therefore need to be sufficiently clear and detailed to avoid ambiguity or confusion. Is 'the problem' a risk factor for a disease/condition (eg, smoking) or the disease/condition itself (eg, lung cancer)? If the former, it is important to be aware of the factor's importance relative to other risk factors. If this is modest, even a successful intervention to change it might be insufficient to change the ultimate outcome.

Once the problem is defined, one should try to establish how it is socially and spatially distributed, including who is currently most/least likely to benefit from an intervention. It is also important to consider what interventions or policies currently exist and why they are not deemed adequate.

Having defined the problem, one needs to understand, as far as possible, the immediate (proximal) and underlying (distal) influences that give rise to it. These are often suggested by the distribution of the problem, its history and its relationship to the life course. It is only by understanding what shapes and perpetuates the problem (the causal pathways) that one can identify possible ways to intervene. Case study step 1 applies Funnell and Rogers' useful questions for problem analysis to gender-based violence (GBV) (ref. 19, p. 160). The main influences on the problem can also be classified according to the socioecological model. 12, 20 It can often be helpful to present the various causal pathways affecting the problem diagrammatically: figure 1 attempts to do this for GBV, distinguishing different levels of the socioecological model.

Figure 1. Causal pathways perpetuating gender-based violence. 21–26
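For readers who find it helpful to see causal mapping made concrete, here is a minimal sketch of a causal-pathway diagram represented as a directed graph in Python. The factor names are hypothetical stand-ins, not the actual content of figure 1, and the traversal simply lists every factor upstream of the outcome.

```python
# Minimal sketch: a causal-pathway diagram as a directed graph.
# Factor names are hypothetical stand-ins, not the contents of figure 1.
causal_pathways = {
    "gender-inequitable norms": ["acceptance of violence"],
    "harsh parenting in childhood": ["acceptance of violence", "poor conflict skills"],
    "alcohol misuse": ["escalation of conflict"],
    "acceptance of violence": ["gender-based violence"],
    "poor conflict skills": ["escalation of conflict"],
    "escalation of conflict": ["gender-based violence"],
}

def upstream_factors(graph, outcome):
    """Return every factor that has a causal path leading to the outcome."""
    found, frontier = set(), [outcome]
    while frontier:
        node = frontier.pop()
        for factor, effects in graph.items():
            if node in effects and factor not in found:
                found.add(factor)
                frontier.append(factor)
    return found

# Every factor upstream of the problem is a candidate place to intervene.
print(sorted(upstream_factors(causal_pathways, "gender-based violence")))
```

Walking the graph this way mirrors what the diagram is for: tracing each route from distal influences to the problem so that candidate intervention points become visible.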

2. CLARIFY WHICH CAUSAL OR CONTEXTUAL FACTORS ARE MALLEABLE AND HAVE GREATEST SCOPE FOR CHANGE

The next step is to identify which of the immediate or underlying factors that shape a problem have the greatest scope to be changed. These might be at any point along the causal chain. For example, is it more promising to act on the factors that encourage children to start smoking (primary prevention), or to target existing smokers through smoking cessation interventions (secondary prevention)? In general, 'upstream' structural factors take longer, and are more challenging, to modify than 'downstream' proximal factors, but, if achieved, structural changes have the greatest population impact, as noted above. 13

With complex problems the causal pathways can be very diverse and interwoven. If they have been described diagrammatically in step 1, it will be easier to identify where one might intervene and, critically, whether it is necessary to intervene at more than one point, or on more than one level, to interrupt the most important causal pathways. One must also assess which changes would have most effect. Most interventions take place within systems (eg, healthcare, education, criminal justice) and exert their influence by changing relationships, displacing existing activities and redistributing and transforming resources. 27 It is necessary, therefore, to consider which system an intervention would operate in, how the system is likely to interact with the intervention and whether that system needs to be/can be modified as well. For example, a school-based intervention to improve pupils’ social and emotional well-being is likely to be affected by existing school structures, relationships and timetables and might require their modification. Interventions that address complex problems through multilevel actions are more likely to maximise synergy and long-term success. 13 The potential different levels of intervention are shown in table 2 .

In the case study of GBV, it was decided to focus on early prevention in families since: this has the potential for widespread, long-term change; it may improve other outcomes; and more proximal factors were already being addressed.

3. IDENTIFY HOW TO BRING ABOUT CHANGE: THE CHANGE MECHANISM

Having identified the most promising modifiable causal factors to address, the next step is to think through how to achieve that change. All interventions have an implicit or explicit programme theory 19 about how they are intended to bring about the desired outcomes. Central to this is the ‘change mechanism’ or ‘active ingredient’, 2 the critical process that triggers change for individuals, groups or communities (see table 2 ).

It is usually helpful to depict the programme theory diagrammatically (see http://www.theoryofchange.org/ ). Many interventions are not intended to achieve the final goal directly, but have short-term and intermediate outcomes that are expected to lead to the long-term outcomes. Ideally, a range of stakeholders are involved in formulating the programme theory. A common pitfall is that it is wildly optimistic, with little empirical evidence to support each link in the causal chain. For instance, a short-term change in health-related knowledge may be necessary, but it is rarely sufficient to achieve behaviour change, let alone prevent the disease in question.
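One way to make the 'each link needs evidence' point operational is to write the programme theory down as an explicit chain and flag unsupported links. The sketch below uses hypothetical activities and outcomes; it illustrates the bookkeeping, not a prescribed tool.

```python
# A programme theory written as an explicit causal chain.
# Each link records the evidence behind it; all names are hypothetical.
programme_theory = [
    ("parenting workshops", "better parent-child communication", "pilot interviews"),
    ("better parent-child communication", "less harsh discipline", "cohort studies"),
    ("less harsh discipline", "reduced acceptance of violence", None),  # no evidence yet
]

# Flag the wildly optimistic links: those asserted without supporting evidence.
for cause, effect, evidence in programme_theory:
    if evidence is None:
        print(f"Unsupported link: {cause} -> {effect}")
```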

The best developed programme theories are based on formalised theories of behaviour change (eg, Social Cognitive Theory 28 or the Theory of Reasoned Action 29). This is not essential, but can be very helpful if the theory has strong predictive and explanatory power. However, not many do, 30, 31 perhaps because they often address only one causal strand (cognitions or motivation) and not socioenvironmental determinants. Furthermore, few interventions said to be based on such formalised theories clarify how the theory has been operationalised. What is crucial in intervention development is that the change mechanisms in the programme theory are clearly articulated.

The interpersonal change mechanisms for the GBV case study are shown below (there is not space to show those at the community level). Critical to their effectiveness is who delivers the intervention and their relationship with the target group.

4. IDENTIFY HOW TO DELIVER THE CHANGE MECHANISMS

Having identified the change mechanisms, step 4 requires working out how best to deliver them. As with other steps, it is helpful to involve stakeholders with the relevant practical expertise to develop the implementation plan. Sometimes change mechanisms can only be brought about through a very limited range of activities, for instance legal change is achieved through legislation. However, other change mechanisms might have several delivery options; for instance modelling new behaviours could be performed by teachers, peers or actors in TV/radio soap operas. The choice is likely to be target group-specific and context-specific.

The implementation plan requires clarifying the conditions and resources necessary for successful implementation and the related risks and assumptions. For example, if an intervention is to be delivered by health visitors, are they available everywhere and will their senior managers allow time for training and delivery? In low income countries resource constraints can seriously restrict options for delivery, for instance the existence of suitably skilled facilitators, or an ethos of voluntarism. In step 4 one should also anticipate possible unintended effects of the intervention and minimise any that might be harmful. These have been categorised by Lorenc and Oliver 32 as fivefold: direct, psychological, equity, group/social and opportunity. ‘Equity harms’ are currently of particular policy concern, the greatest beneficiaries of many behaviour change interventions being the higher educated or more affluent, thereby exacerbating inequalities in health outcomes. 15

The delivery of the change mechanisms for our case study of GBV is set out below.

5. TEST AND REFINE ON SMALL SCALE

Once the initial intervention design has been resolved, in most cases its feasibility needs to be tested and adaptations made. This varies considerably according to the type of intervention. For instance, national legislation or large-scale health protection measures, such as water fluoridation, are difficult to pilot before full implementation. Phased region-by-region implementation might allow incremental adjustments, but the scope for adaptation is primarily around implementation rather than the mechanism of change. With individual or community level interventions, a long process of repeated testing and adaptation, often called 'formative evaluation', is usually required, especially if the intervention is novel or highly innovative.

Testing the intervention can clarify fundamental issues such as: acceptability to the target group, practitioners and delivery organisations; optimum content (eg, how participatory), structure and duration; who should deliver it and where; what training is required; and how to maximise population reach.

Frequently this is the most hurried stage of intervention development, due to lack of resources and time, but this often compromises subsequent effectiveness. Ideally incremental adaptations would each be tested separately, but in practice adaptations can be made simultaneously if sufficiently rich data are collected to enable judgements about which are helpful and which not. Practical constraints eventually force the decision that the intervention is ‘good enough’ to go to the next step. The testing and adapting of the GBV programme is set out below.

6. COLLECT SUFFICIENT EVIDENCE OF EFFECTIVENESS TO JUSTIFY RIGOROUS EVALUATION/IMPLEMENTATION

Before committing resources to a large-scale, rigorous evaluation (typically a 'phase III' RCT), the final step is to establish sufficient evidence of effectiveness to warrant such investment. Beyond the research world, especially in third sector organisations, inadequate resources often mean practitioners move to wide-scale implementation without such rigorous evaluation. This makes step 6 all the more critical.

What is being sought at this stage is some evidence that the intervention is working as intended, that it is achieving at least some short-term outcomes, and that it is not having any serious unintended effects, for instance exacerbating social inequalities. It is unlikely that the evaluation design will seek to prove causality, so theory-based evaluation approaches are likely to be most appropriate. There are numerous guides on evaluation that adequately cover step 6, 4, 33 but it is worth re-stating that often the most practical way to collect evidence of effectiveness with limited resources is through a before-and-after survey, or by using routinely collected data. If possible, a control group greatly increases the strength of evidence. If a phase III RCT is planned, an 'exploratory trial' can provide valuable information about the acceptability of evaluation designs, appropriate measures and likely effect sizes to inform subsequent trials. Plans for this final step with our case study are set out below.
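As a worked illustration of why a control group strengthens before-and-after evidence, the following sketch (with invented figures) contrasts a simple pre-post change with a difference-in-differences estimate that subtracts the background trend observed in the control group.

```python
# Worked illustration (invented figures): estimating an effect from a
# before-and-after survey, with and without a control group.
intervention_pre, intervention_post = 0.40, 0.55  # share achieving the outcome
control_pre, control_post = 0.40, 0.47            # same measure, no intervention

simple_change = intervention_post - intervention_pre          # 0.15
background_trend = control_post - control_pre                 # 0.07
difference_in_differences = simple_change - background_trend  # 0.08

print(f"Pre-post change alone:     {simple_change:.2f}")
print(f"Change in control group:   {background_trend:.2f}")
print(f"Difference-in-differences: {difference_in_differences:.2f}")
```

Here the before-and-after survey alone would overstate the effect, since roughly half of the observed change also occurred where there was no intervention.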

What is already known on this subject?

There is little practical guidance for researchers or practitioners on how best to develop public health interventions. Existing models are generally orientated towards individual behaviour change and some are highly technical and take years to implement.

What this study adds

This paper provides a pragmatic six-step guide to develop interventions in a logical, evidence-based way to maximise likely effectiveness. If each step is carefully addressed, better use will be made of scarce public resources by avoiding the costly evaluation, or implementation, of unpromising interventions.

Acknowledgments

The authors are very grateful for seminal input from Sally Wyke (Institute of Health and Wellbeing) in developing this model and for comments on an earlier draft from Peter Craig (MRC/CSO Social and Public Health Sciences Unit), both in the University of Glasgow. The authors’ salaries came through MRC core funding (MR_UU_12017/9; 171332-01) and NHS Health Scotland. The parenting intervention to address familial predictors of GBV in Uganda is led by Godfrey Siu, Child Health and Development Centre, Makerere University and funded by the Sexual Violence Research Initiative, South Africa, and Bernard van Leer Foundation.

References

  • MRC. A framework for the development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council, 2000.
  • MRC. Development and evaluation of complex interventions: new guidance. London: Medical Research Council, 2008.
  • CORE Group. Designing for Behaviour Change Framework. Washington: CORE Group and USAID, 2008.
  • Population Council. Sexual and Gender Based Violence in Africa: Literature Review. Nairobi: Population Council, 2008.
  • WHO. Violence against women: prevalence. http://www.who.int/reproductivehealth/publications/violence/VAW_Prevelance.jpeg?ua=1 (accessed Sep 2015).
  • MRC. Guidance for process evaluation of complex interventions. London: Medical Research Council, 2014.

Contributors DW, RJ and EW shared the original idea to develop a model of intervention development. All contributed to discussions to identify the six steps. DW wrote the first draft of this paper and all revised successive drafts.

Funding Medical Research Council (grant no. MR UU 12017/9; 171332-01); NHS Health Scotland.

Competing interests The authors declare that: the submitted work has been supported, through funding of salaries, by the UK Medical Research Council (DW, RJ, LD) and NHS Health Scotland (EW).

Provenance and peer review Not commissioned; externally peer reviewed.


Section 1. Designing Community Interventions (Community Tool Box, Chapter 18)
Adapted from "Conducting intervention research: The design and development process" by Stephen B. Fawcett et al.

You've put together a group of motivated, savvy people, who really want to make a difference in the community. Maybe you want to increase adults' physical activity and reduce risks for heart attacks; perhaps you want kids to read more and do better in school. Whatever you want to do, the end is clear enough, but the means--ah, the means are giving you nightmares. How do you reach that goal your group has set for itself? What are the best things to do to achieve it?

Generally speaking, what you're thinking about is intervening in people's environments, making it easier and more rewarding for people to change their behaviors. In the case of encouraging people's physical activity, you might provide information about opportunities, increase access to opportunities, and enhance peer support. Different ways to do this are called, sensibly enough, interventions. Comprehensive interventions combine the various components needed to make a difference.

What is an intervention?

But what exactly is an intervention? Well, what it is can vary. It might be a program, a change in policy, or a certain practice that becomes popular. What is particularly important about interventions, however, is what they do. Interventions focus on people's behaviors, and how changes in the environment can support those behaviors. For example, a group might have the goal of trying to stop men from raping women.

However, it's clearly not enough to broadcast messages saying, "You shouldn't commit a rape." And so, more successful interventions attempt to change the conditions that allow and encourage those behaviors to occur. Interventions that might be used to stop rape include:

  • Improving street lighting to make it easier to avoid potential attackers
  • A "safe ride" program giving free rides so people don't need to walk alone after dark
  • Skills training on date rape and how to avoid it, so that women will practice more careful decision making on dates with men they don't know well, especially in regard to using alcohol and drugs
  • Policy changes, such as stronger penalties for people who commit rape, or changes that simplify the process a rape victim must go through to bring the perpetrator to justice

Why should you develop interventions?

There are many strong advantages to using interventions as a means to achieve your goals. Some are very apparent; some possibly less so. Some of the more important of these advantages are:

  • By designing and implementing interventions in a clear, systematic manner, you can improve the health and well-being of your community and its residents.
  • Interventions promote understanding of the condition you are working on and its causes and solutions. Simply put, when you do something well, people notice, and the word slowly spreads. In fact, such an intervention can produce a domino effect, sparking others to understand the issue you are working on and to work on it themselves.
For example, a grade school principal in the Midwest was struck by the amount of unsupervised free time students had between three and six o'clock, when their parents got home from work. From visiting her own mother in a nursing home, she knew, too, of the loneliness felt by many residents of such homes. So she decided to try to lessen both problems by starting a "Caring Hearts" program. Students went to nursing homes to see elders after school once or twice a week to visit, play games, and exchange stories. Well, a reporter heard about the program, and did a feature article on it on the cover of the "Community Life" section of the local newspaper. The response was tremendous. Parents from all across town wanted their children involved, and similar programs were developed in several schools throughout the town.
  • To do what you are already doing better. Finally, learning to design an intervention properly is important because you are probably doing it already. Most of us working to improve the health and well-being of members of our community design (or at least run) programs, or try to change policies such as local laws or school board regulations, or try to change the things some people regularly practice. By better understanding the theories behind choosing, designing, and developing an intervention, you will improve on the work you are currently doing.

When should you develop an intervention?

It makes sense to develop or redesign an intervention when:

  • There is a community issue or problem that local people and organizations perceive as an unfilled need
  • Your organization has the resources, ability, and desire to fill that need, and
  • You have decided that your group is the appropriate one to accomplish it

The last of these three points deserves some explanation. There will always be things that your organization could do, that quite probably should be left to other organizations or individuals. For example, a volunteer crisis counseling center might find they have the ability to serve as a shelter for people needing a place to stay for a few nights. However, doing so would strain their resources and take staff and volunteers away from the primary mission of the agency.

In cases like this, where could does not equal should, your organization might want to think twice about developing a new intervention that will take away from the mission.

How do you develop an intervention?

So, people are mobilized, the coffee's hot, and you're ready to roll. Your group is ready to take on the issue--you want to design an intervention that will really improve conditions in the area. How do you start?

Decide what needs to happen

This could be a problem that needs to be solved, such as, "too many students are dropping out of school." However, it might also be something good that you want to make happen more often. For example, you might want to find a way to convince more adults to volunteer with school-aged children. At this point, you will probably want to define the problem broadly, as you will be learning more about it in the next few steps. Keep in mind these questions as you think about this:

  • What behavior needs to change?
  • Whose behavior needs to change?
  • If people are going to change their behavior, what changes in the environment need to occur to make it happen? For example, if you want people to recycle, you'll have much better results if there is easy access to recycling bins.
  • What specific changes should happen as a result of the intervention?

You don't need to have answers to all of these questions at this point. In fact, it's probably better to keep an open mind until you gather more information, including by talking with people who are affected (we'll get to that in the next few steps). But thinking about these questions will help orient you and get you moving in the right direction.

Use a measurement system to gather information about the level of the problem

You will need to gather information about the level of the problem before you do anything, both to see if it is as serious as it seems and to establish a baseline against which to measure later improvement (or worsening).

Measurement instruments include:

  • Direct observations of behavior. For example, you can watch whether merchants sell alcohol to people under the age of 21.
  • Behavioral surveys. For example, the Youth Risk Behavior Survey of the U.S. Centers for Disease Control and Prevention asks questions about drug use, unprotected sexual activity, and violence.
  • Interviews with key people. For example, you might ask about changes in programs, policies, and practices that the group helped bring about.
  • Review of archival or existing records. For example, you might look at records of the rate of adolescent pregnancy, unemployment, or children living in poverty.

The group might review the level of the problem over time to detect trends--is the problem getting better or worse? It also might gather comparison information--how are we doing compared to other, similar communities?
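A minimal sketch of that kind of review, using invented figures: compute the average year-on-year change to detect a trend, and compare the latest level against a similar community.

```python
# Sketch (invented figures): reviewing the level of a problem over time
# and against a similar community.
years = [2016, 2017, 2018, 2019]
our_rate = [12.1, 13.4, 14.0, 15.2]         # e.g., adolescent pregnancies per 1,000
comparison_rate = [12.3, 12.2, 12.5, 12.4]  # a similar community

trend = (our_rate[-1] - our_rate[0]) / (len(years) - 1)  # average change per year
gap = our_rate[-1] - comparison_rate[-1]                 # where we stand today

print(f"Average change per year: {trend:+.2f}")                    # +1.03, worsening
print(f"Gap vs. comparison community in {years[-1]}: {gap:+.1f}")  # +2.8
```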

Decide who the intervention should help

In a childhood immunization program, your interventions would be aimed at helping children. Likewise, in a program helping people to live independently, the intervention would try to help older adults or people with disabilities. Your intervention might not be targeted at all, but be for the entire community. For example, perhaps you are trying to increase the amount of policing to make local parks safer. This change of law enforcement policy would affect people throughout the community.

Usually, interventions will target the people who will directly benefit from the intervention, but this isn't always the case. For example, a program to try to increase the number of parents and guardians who bring in their children for immunizations on time would benefit the children most directly. However, interventions wouldn't target them, since children aren't the ones making the decision. Instead, the primary "targets of change" for your interventions might be parents and health care professionals.

Before we go on, some brief definitions may be helpful. Targets of change are those people whose behavior you are trying to change. As we saw above, these people may be--but are not always--the same people who will benefit directly from the intervention. They often include others, such as public officials, who have the power to make needed changes in the environment. Agents of change are those people who can help make change occur. Examples might be local residents, community leaders, and policy makers. The "movers and the shakers," they are the ones who can make things happen--and who you definitely want to contribute to the solution.

Involve potential clients or end users of the intervention

Once you have decided broadly what should happen and who it should happen with, you need to make sure you have involved the people affected. Even if you think you know what they want--ask anyway. For your intervention to be successful, you can't have too much feedback. Some of these folks will likely have a perspective on the issue you hadn't even thought of.

Also, by asking for their help, the program becomes theirs. For example, by giving teachers and parents input in designing a "school success" intervention, they take "ownership" for the program. They become proud of it--which means they won't only use it, they'll also support it and tell their friends, and word will spread.

Again, for ideas on how to find and choose these people, the section mentioned above on targets and agents of change may be helpful.

Identify the issues or problems you will attempt to solve together

There are a lot of ways in which you can talk with people affected about the information that interests you. Some of the more common methods include:

  • Informal personal contact - just talking with people, and seeing what they have to say
  • Focus groups
  • Community forums
  • Concerns surveys

When you are talking to people, try to get at the real issue--the one that is the underlying reason for what's going on. It's often necessary to focus not on the problem itself, but on addressing the cause of the problem.

For example, if you want to reduce the number of people in your town who are homeless, you need to find out why so many people in your town lack decent shelter: Do they lack the proper skills to get jobs? Is there a large mentally ill population that isn't receiving the help it should? Your eventual intervention may address deeper causes, seeming to have little to do with reducing homelessness directly, although that remains the goal.

Analyze these problems or the issue to be addressed in the intervention

Using the information you gathered in step five, you need to decide on answers to some important questions. These will depend on your situation, but many of the following questions might be appropriate for your purpose:

  • What factors put people at risk for (or protect them against) the problem or concern?
  • Whose behavior (or lack of behavior) caused the problem?
  • Whose behavior (or lack of behavior) maintains the problem?
  • For whom is the situation a problem?
  • What are the negative consequences for those directly affected?
  • What are the negative consequences for the community?
  • Who, if anyone, benefits from things being the way they are now?
  • How do they benefit?
  • Who should share the responsibility for solving the problem?
  • What behaviors need to change to consider the problem "solved"?
  • What conditions need to change to address the issue or problem?
  • How much change is necessary?
  • At what level(s) should the problem be addressed? Is it something that should be addressed by individuals; by families working together; by local organizations or neighborhoods; or at the level of the city, town, or broader environment?
  • Will you be able to make changes at the level(s) identified? This question covers technical capability, whether you have enough money to do it, and whether the change is politically feasible.

Set goals and objectives

When you have gotten this far, you are ready to set the broad goals and objectives of what the intervention will do. Remember, at this point you still have NOT decided what that intervention will be. This may seem a little backwards to your normal thinking--but we're starting from the finish line, and asking you to move backwards. Give it a try--we think it will work for you.

Specifically, you will want to answer the following questions as concretely as you can:

  • What should the intervention accomplish? For example, your goal might be for most of the homeless people who are able to hold jobs to be doing so by the end of the intervention.
  • What will success look like? If your intervention is successful, how will you know it? How will you explain to other people that the intervention has worked? What are the "benchmarks" or indicators that show you are moving in the right direction?
  • For example, you might say, "By 2010 (when), 80% of those now homeless (who) will be successfully employed at least part time (change sought)."
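Checking progress against such a benchmark is simple arithmetic; the sketch below uses invented numbers for the homeless-employment example.

```python
# Hypothetical sketch: checking progress against the benchmark
# "80% of those now homeless will be employed at least part time."
target_share = 0.80
baseline_group = 250      # people homeless when the effort began (invented)
employed_part_time = 172  # of that group, employed at least part time now (invented)

share = employed_part_time / baseline_group
print(f"Currently at {share:.0%}; the benchmark is {target_share:.0%}")
print("Benchmark met" if share >= target_share else "Not yet at the benchmark")
```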

Learn what others have done

Now, armed with all of the information you have found so far, you are ready to start concentrating on the specific intervention itself. The easiest way to start this is by finding out what other people in your situation have done. Don't reinvent the wheel! There might be some "best practices"--exceptional programs or policies--out there that are close to what you want to do. It's worth taking the time to try to find them.

Where do you look for promising approaches? There are a lot of possibilities, and how exhaustive your search will be will depend on the time and resources you have (not to mention how long it takes you to find something you like!) But some of the more common resources you might start with include:

  • See what local examples are available. What has worked in your community? How about in nearby places? Can you figure out why it worked? If possible, talk to the people responsible for those approaches, and try to understand why and how they did what they did.
  • Look for examples of what has been done in articles and studies in related fields. Sources might be professional journals, such as the American Journal of Public Health, or even occasionally, general news magazines. Also, look at interventions that have been done for related problems--perhaps they can be adapted for use by your group. Information and awareness events, for example, tend to be general in nature--you can do a similar event and change what it's for. A 5-K race might be planned, for example, to raise awareness of and money for breast cancer, to protest environmental destruction, and so on.
  • National conferences. If you can, attending national meetings or conferences on the problem or issue you are trying to solve can give you excellent insight on some of the "best practices" that are out there.

Brainstorm ideas of your own

Take a sheet of paper and write down all of the possibilities you can think of. If you are deciding as a group, this could be done on poster paper attached to a wall, so everyone can see the possibilities-- this often works to help people come up with other ideas. Be creative!

Try to decide what interventions or parts of interventions have worked, and what might be applicable to your situation

What can your organization afford to do? And by afford, we mean financially, politically, time, and resource wise. For example, how much time can you put into this? Will the group lose stature in the community, or support from certain people, by doing a particular intervention?

When you are considering interventions done by others, look specifically for ones that are:

  • Appropriate - Do they fit the group's purpose?
  • Effective - Did they make a difference on behavior and outcome?
  • Replicable - Are the details and results of what happened in the original intervention explained well enough to repeat what was done? Unfortunately, this isn't always the case--many people, when you talk to them, will say, "Oh! We just did it!"
  • Simple - Is it clear enough for people in your group to do?
  • Practical - Do we have the time and money to do this?
  • Compatible with your situation - Does it fit local needs, resources, and values?

Identify barriers and resistance you might come up against

What barriers and resistance might we face? How can they be overcome? Be prepared for whatever may come your way.

For example, a youth group working to prevent substance use wanted to outlaw smoking by everyone on the high school campus, including the teachers and other staff members. However, they knew they would come up against resistance among teachers and staff members who smoked. How might they overcome that opposition?

Identify core components and elements of the intervention

Here is where we get to the nuts and bolts of designing an intervention.

First, decide the core components that will be used in the intervention. Much like broad strategies, these are the general things you will do as part of the intervention. They are the "big ideas" that can then be further broken down.

There are four classes of components to consider when designing your intervention:

  • Providing information and skills training
  • Enhancing support and resources
  • Modifying access and barriers
  • Monitoring and giving feedback

A comprehensive intervention will choose components for each of these four categories. For example, a youth mentoring program might choose the following components:

  • For providing information and skills training, a component might be recruitment of youth and mentors
  • For enhancing support and resources, a component might be arranging celebrations among program participants
  • For modifying access and barriers, a component might be making it easier to volunteer
  • For monitoring and giving feedback, a component might be tracking the number of young people and volunteers involved

Next, decide the specific elements that compose each of the components. These elements are the distinct activities that will be done to implement the components.

For example, a comprehensive effort to prevent youth smoking might include public awareness and skills training, restricting tobacco advertising, and modifying access to tobacco products. For the component of trying to modify access, an element of this strategy might be to do 'stings' at convenience stores to see which merchants are selling tobacco illegally to teens. Another element might be to give stiffer penalties to teens who try to buy cigarettes, and to those merchants who sell.
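If it helps to keep the pieces straight, the four component classes and their elements can be written down as a simple mapping. The sketch below reuses the youth-smoking examples from the text, with a couple of hypothetical elements added so that every class has an entry.

```python
# Sketch: the four component classes as keys, elements as entries.
# Drawn from the examples in the text; items marked 'hypothetical' are added
# only so that every class has at least one element.
intervention = {
    "providing information and skills training": [
        "public awareness campaign",
        "skills training",
    ],
    "enhancing support and resources": [
        "celebrations among program participants",
    ],
    "modifying access and barriers": [
        "'stings' at stores selling tobacco to teens",
        "stiffer penalties for teen buyers and merchants who sell",
    ],
    "monitoring and giving feedback": [
        "tracking illegal sales rates over time",  # hypothetical
    ],
}

# A comprehensive intervention has at least one element in every class.
print("Comprehensive:", all(intervention.values()))
```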

Develop an action plan to carry out the intervention

When you are developing your action plan, you will want it to answer the following questions:

  • What components and elements will be implemented?
  • Who should implement what by when?
  • What resources and support are needed? What are available?
  • What potential barriers or resistance are expected? How will they be minimized?
  • What individuals or organizations need to be informed? What do you need to tell them?
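One low-tech way to keep the plan answerable is to store each element as a row with those questions as fields; everything in the sketch below is a hypothetical illustration.

```python
# Sketch: an action plan as rows that answer who, what, by when, and
# with what resources. All entries are hypothetical illustrations.
action_plan = [
    {"element": "recruit mentors", "who": "volunteer coordinator",
     "by_when": "March", "resources": "school mailing list",
     "barriers": "low response rate", "inform": "school principals"},
    {"element": "train mentors", "who": "program staff",
     "by_when": "April", "resources": "training curriculum",
     "barriers": "scheduling conflicts", "inform": "participating parents"},
]

for task in action_plan:
    print(f"{task['by_when']}: {task['element']} -- {task['who']} "
          f"(inform: {task['inform']})")
```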

Pilot-test your intervention

None of us likes to fall flat on our face, but frankly, it's a lot easier when there aren't very many people there to watch us, and when there isn't a lot on the line. By testing your intervention on a small scale, you have the chance to work out the bugs and get back on your feet before the crowd comes in. When doing your pilot test, you need to do the following things:

  • Decide how the intervention will be tested on a small scale
  • Evaluate your results
  • Pay particular attention to unintended consequences or side effects that you find when you evaluate your work
  • Use feedback from those who tried the intervention to simplify and refine your plan

Implement your intervention

If you have followed all of the steps above, implementing your action plan will be easier. Go to it!

Constantly monitor and evaluate your work

When the wheels are turning and things seem to be under control, congratulations! You have successfully implemented your intervention! But of course, the work never ends. It's important to see if the intervention is working, and to "tweak" it and make changes as necessary.

Designing an intervention, and doing it well, isn't necessarily an easy task. There are a lot of steps involved, and a lot of work to be done, if you are going to do it well. But by systematically going through the process, you are able to catch mistakes before they happen; you can stand on the shoulders of those who have done this work before you and learn from their successes and failures.

Online Resources

Community Health Advisor from the Robert Wood Johnson Foundation is a helpful online tool with detailed information about evidence-based policies and programs to reduce tobacco use and increase physical activity in communities.

The Society for Community Research and Action serves many different disciplines that are involved in strategies to improve communities. It hosts a general electronic discussion list as well as several by special interest.

The U.S. Dept. of Housing and Urban Development features "Success Stories" and gives ideas for ways to solve problems in your community.

The National Civic League provides a database of Success Stories.

The Pew Partnership for Civic Change offers several resources for promising solutions for building strong communities.

The World Health Organization provides information on many types of interventions around the world.

Print Resources

Fawcett, S., Suarez, Y., Balcazar, F., White, G., Paine, A., Blanchard, K., & Embree, M. (1994). Conducting intervention research: The design and development process. In J. Rothman & E. J. Thomas (Eds.), Intervention research: Design and development for human service (pp. 25-54). New York, NY: Haworth Press.

  • Medical and Healthcare Law
  • Browse content in Policing
  • Criminal Investigation and Detection
  • Police and Security Services
  • Police Procedure and Law
  • Police Regional Planning
  • Browse content in Property Law
  • Personal Property Law
  • Restitution
  • Study and Revision
  • Terrorism and National Security Law
  • Browse content in Trusts Law
  • Wills and Probate or Succession
  • Browse content in Medicine and Health
  • Browse content in Allied Health Professions
  • Arts Therapies
  • Clinical Science
  • Dietetics and Nutrition
  • Occupational Therapy
  • Operating Department Practice
  • Physiotherapy
  • Radiography
  • Speech and Language Therapy
  • Browse content in Anaesthetics
  • General Anaesthesia
  • Browse content in Clinical Medicine
  • Acute Medicine
  • Cardiovascular Medicine
  • Clinical Genetics
  • Clinical Pharmacology and Therapeutics
  • Dermatology
  • Endocrinology and Diabetes
  • Gastroenterology
  • Genito-urinary Medicine
  • Geriatric Medicine
  • Infectious Diseases
  • Medical Toxicology
  • Medical Oncology
  • Pain Medicine
  • Palliative Medicine
  • Rehabilitation Medicine
  • Respiratory Medicine and Pulmonology
  • Rheumatology
  • Sleep Medicine
  • Sports and Exercise Medicine
  • Clinical Neuroscience
  • Community Medical Services
  • Critical Care
  • Emergency Medicine
  • Forensic Medicine
  • Haematology
  • History of Medicine
  • Browse content in Medical Dentistry
  • Oral and Maxillofacial Surgery
  • Paediatric Dentistry
  • Restorative Dentistry and Orthodontics
  • Surgical Dentistry
  • Browse content in Medical Skills
  • Clinical Skills
  • Communication Skills
  • Nursing Skills
  • Surgical Skills
  • Medical Ethics
  • Medical Statistics and Methodology
  • Browse content in Neurology
  • Clinical Neurophysiology
  • Neuropathology
  • Nursing Studies
  • Browse content in Obstetrics and Gynaecology
  • Gynaecology
  • Occupational Medicine
  • Ophthalmology
  • Otolaryngology (ENT)
  • Browse content in Paediatrics
  • Neonatology
  • Browse content in Pathology
  • Chemical Pathology
  • Clinical Cytogenetics and Molecular Genetics
  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Strategy
  • Business Ethics
  • Business History
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Social Issues in Business and Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic History
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Research Methodology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Management of Land and Natural Resources (Social Science)
  • Natural Disasters (Environment)
  • Pollution and Threats to the Environment (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Sustainability
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Administration
  • Public Policy
  • Qualitative Political Methodology
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Disability Studies
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Evidence-Based Public Health (2nd edn)


9 Developing an Action Plan and Implementing Interventions

Published: December 2010

Once a particular intervention—a program or policy—has been identified, sound planning techniques can ensure that the program is implemented effectively. This chapter focuses on action planning—that is, planning for a defined program or policy with specific, time-dependent outcomes compared with ongoing planning that is a regular function within an organization. The chapter is organized in five main sections, designed to highlight ecologic frameworks, give examples of behavioral science theories that can increase the likelihood of carrying out effective interventions, review key principles of planning, outline steps in action planning, and describe important aspects of coalition-based interventions.


What is the intervention?


Ensuring Consistency and Quality of the Intervention

Whether your intervention is a drug, a type of counselling, or anything else, it needs to be delivered in the same way throughout the trial – and this needs careful consideration during trial design.

For example, if your trial is to test whether text messages are successful as an intervention to remind patients to take medication, your intervention is the text message. Here, it will be very easy to ensure consistency for all participants in the research: they will either receive or not receive the text message, and you can ensure that the text is the same every single time.

However, if your intervention is a type of counselling, it is much harder to ensure consistency across all subjects, so you would probably need to create a framework so that the main elements can be applied consistently. For example, you would want to ensure that all participants had the same number of sessions, that each session was the same length, and that all counsellors in your trial were working together to ensure a consistent approach. You might also prepare specific options, such as specific applications of Cognitive Behavioural Therapy, so that participants' experiences are as similar as possible to one another.

If your intervention is a drug, there are other things to consider. Perhaps you would like to compare two common pain relief drugs, A and B. Even if they are commonly available, you would still need to ensure that your entire intervention supply was the same throughout the trial. The drug would also need to be correctly stored, accounted for, and managed (for example, if the drug must not be exposed to temperatures above +20 degrees, how will you transport it and ensure that this limit is maintained?). How will you ensure that the right amount of the intervention reaches your trial sites and is correctly stored there?

Interventional Trials

There are two types of intervention studies, namely randomised controlled trials and non-randomised or quasi-experimental trials.

An interventional trial broadly involves selecting subjects with particular characteristics and splitting them into those receiving an intervention and those receiving no intervention (the control group). In a randomised trial, participants (volunteers) are assigned to exposures purely by chance. Comparing the outcomes of the two groups at the end of the study period provides an evaluation of the intervention.

Intervention studies are not limited to clinical trials; they are used broadly across sociological, epidemiological, and psychological research, as well as in public health research.

Aside from removing selection bias, another advantage of randomised trials is that, if conducted properly, they can reliably detect small to moderate effects of the intervention – something that is difficult to establish from observational studies. They also eliminate confounding: randomisation tends to create groups that are comparable for all factors that influence the outcome – known, unknown, or difficult to measure – so that the only systematic difference between the two groups is the intervention.

They can also be used to establish the safety, cost-effectiveness, and acceptability of an intervention. Randomised clinical trials have disadvantages too. A trial whose sample size is too small may be unethical: it wastes time and resources, and patients are enrolled in a study that cannot benefit them or others. Results can also be statistically significant yet clinically unimportant. Finally, the results may not generalise to the broader community, since those who volunteer tend to differ from those who do not.

Double-blind randomised controlled trials are considered the gold standard of clinical research because blinding is one of the best ways of removing bias in clinical trials. If both the participants and the researchers are unaware of which exposure each participant is receiving, the study is described as “double-blinded”.

Characteristics of an Intervention Study

Target Population

The first step in any intervention study is to specify the target population: the population to which the findings of the trial should be extrapolated. This requires a specific definition of the subjects prior to selection, as set out in the inclusion and exclusion criteria. The exclusion criteria specify the types of patients who must be excluded because their characteristics could confound the results – for example, they are very old or very young (which may affect how the drug works), they are pregnant and it is not yet known whether the drug is safe in pregnancy, they are currently enrolled in another trial, they have another medical condition that might affect their involvement – or any other reason that would affect their participation. Inclusion criteria clarify who should be in the trial: for example, males and females between the ages of 18 and 50 who have X disease, and so on.

Those who are eventually found to be both eligible and willing to enrol compose the actual “study population”, and they are often a relatively selected subgroup of the experimental population. Participants in an intervention study are very likely to differ from non-participants in many ways. Whether the subgroup of participants is representative of the entire experimental population will not affect the validity of the trial, but it may affect the ability to generalise the results to the target population. It is therefore important to obtain baseline data and/or to ascertain outcomes for subjects who are eligible but unwilling to participate. Such information is extremely valuable for assessing the presence and extent of differences between participants and non-participants, and it helps in judging whether the results among trial participants can be generalised to the target population.

Sample Size

Sample size, simply put, is the number of participants in a study. It is a basic statistical principle that the sample size be defined before starting a clinical study, so as to avoid bias in the interpretation of the results. If there are too few subjects, the study may be unable to detect a real difference between the test groups, and its results cannot be generalised to the target population – making the study unethical. On the other hand, if more subjects than required are enrolled, more individuals are exposed to the risks of the intervention, which is also unethical and wastes precious resources. Every individual in the chosen population should have an equal chance of being included in the sample, and the choice of one participant should not affect the chance of another – hence the need for random sample selection. Calculating an adequate sample size is thus crucial in any clinical study: it is the process by which we determine the optimum number of participants required to arrive at an ethically and scientifically valid result. Factors to consider when calculating the final sample size include the expected drop-out rate, any unequal allocation ratio, and the objective and design of the study. The sample size must always be calculated before initiating a study and, as far as possible, should not be changed during its course.
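
As a concrete illustration, the required sample size per group for comparing two proportions can be computed from the standard normal-approximation formula. The sketch below is a minimal example in Python; the proportions, significance level, power, and drop-out rate are illustrative assumptions, not values taken from this text:

    # Sketch: sample size per group for comparing two proportions
    # (normal approximation). All numbers below are illustrative.
    from math import ceil
    from scipy.stats import norm

    def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
        """n per group to detect p1 vs p2 with a two-sided test."""
        z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
        z_beta = norm.ppf(power)            # e.g. 0.84 for power = 0.80
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
        return ceil(n)

    n_per_group = sample_size_two_proportions(p1=0.40, p2=0.25)
    # Inflate for an assumed 15% drop-out rate, as the text advises:
    n_recruited = ceil(n_per_group / (1 - 0.15))
    print(n_per_group, n_recruited)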

Power

It is important that an intervention study can detect the anticipated effect of the intervention with a high probability. To this end, the necessary sample size must be determined so that the power is high enough. In clinical trials, the minimal value nowadays considered to demonstrate adequate power is 0.80. This means the researcher accepts that one in five times (that is, 20%) they will miss a real difference. This false negative rate – the proportion of real effects erroneously reported as absent – is referred to in statistics by the letter β. The “power” of the study is then equal to (1 − β) and is the probability of detecting a difference when there actually is one. For pivotal or large studies, the power is sometimes set at 90% to reduce the possibility of a “false negative” result to 10%.
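
Working the same normal approximation in the other direction gives the power achieved by a fixed sample size. A minimal sketch, again with illustrative numbers:

    # Sketch: power of a two-sided test comparing two proportions,
    # given n per group (normal approximation; numbers illustrative).
    from math import sqrt
    from scipy.stats import norm

    def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
        z_alpha = norm.ppf(1 - alpha / 2)
        se = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group)
        z = abs(p1 - p2) / se - z_alpha
        return norm.cdf(z)   # power = 1 - beta

    print(power_two_proportions(0.40, 0.25, n_per_group=150))  # ~0.80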

Study Endpoints and Outcome Measures

To evaluate the effect of the intervention, a specific outcome needs to be chosen. In the context of clinical trials, this outcome is called the endpoint. It is advisable to choose one endpoint – the primary endpoint – so that the likelihood of measuring it accurately is as high as possible. The study may also measure other outcomes; these are secondary endpoints. Once the primary endpoint has been decided, how the outcome that provides this endpoint is measured becomes the central focus of the study design and operation.

The choice of the primary endpoint is critical in the design of the study. Where the trial is intended to provide pivotal evidence for regulatory approval for marketing of drugs, biologics, or devices, the primary goal is typically to obtain definitive evidence about the benefit-to-risk profile of the experimental intervention relative to a placebo or an existing standard-of-care treatment. One of the most challenging and controversial issues in designing such trials is the choice of the primary efficacy endpoint, or outcome measure, used to assess benefit. Given that such trials should provide reliable evidence about benefit as well as risk, the primary efficacy endpoint should preferably be a clinical endpoint that measures unequivocally tangible benefit to patients. For example, for life-threatening diseases, one would like to determine the effect of the intervention on mortality or on a clinically significant measure of quality of life, such as relief of disease-related symptoms, improvement in the ability to carry out normal activities, or reduced hospitalisation time.

In many instances, it may be possible to propose alternative endpoints (“surrogates” or surrogate markers) to reduce the duration and size of the trials. A common approach is to identify a biological marker that is “correlated” with the clinical efficacy endpoint (meaning that patients with better results on the biological marker tend to have better results on the clinical endpoint) and then to document the treatment's effect on this biomarker. In oncology, for example, one might attempt to show that the experimental treatment regimen induces tumour shrinkage, delays tumour growth in some patients, or improves levels of biomarkers such as carcinoembryonic antigen (CEA) in colorectal cancer or prostate-specific antigen (PSA) in prostate cancer. Although these effects do not prove that the patient will derive symptom relief or prolonged survival, they are of interest because patients with worsening levels of these biological markers are known to be at greater risk of disease-related symptoms or death. However, demonstrating treatment effects on such biological “surrogate” endpoints, while clearly establishing biological activity, may not provide reliable evidence about the effects of the intervention on clinical efficacy. If the biomarker does not lie in the pathway by which the disease process actually influences the occurrence of the clinical endpoint, then affecting the biomarker might not, in fact, affect the clinical endpoint. There may also be multiple pathways through which the disease process influences the risk of the clinical efficacy endpoint; if the proposed surrogate lies in only one of these pathways and the intervention does not affect all of them, then the effect of treatment on the clinical endpoint could be over- or underestimated by its effect on the surrogate.

In summary, a well-designed trial will have one primary endpoint and possibly several secondary endpoints. The power of the study is designed to answer the question measured by the outcome for the primary endpoint. Measurement of this outcome must be standardised, and its importance must be well understood by everyone on the study team.
A well-designed and well-run trial is able to measure this primary outcome accurately and consistently between staff members, between points in time (the same way at the first visit as at the last visit 12 months later), and between different sites in multi-centre studies.

Randomisation

Randomisation offers a robust method of preventing selection bias, but it may sometimes be unnecessary and other designs preferable; however, the conditions under which non-randomised designs can yield reliable estimates are very limited. Non-randomised studies are most useful where the effects of the intervention are large or where the effects of selection, allocation, and other biases are relatively small. They may also be used to study rare adverse events, which a trial would have to be implausibly large to detect.

Where simple randomisation is likely to lead to unequal group sizes in a small trial, participants can instead be randomised in small blocks of, for example, four participants, with an equal number of control and intervention allocations (in this case two of each) randomly ordered within each block. This ensures that the allocation will not end up significantly unbalanced over the study as a whole.
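
A permuted-block allocation list of this kind is straightforward to generate by computer. The sketch below is illustrative only – the block size of four and the arm labels are assumptions, not requirements:

    # Sketch: permuted-block randomisation with blocks of four
    # (two intervention, two control per block). Illustrative only.
    import random

    def block_randomisation(n_participants, block=("I", "I", "C", "C"),
                            seed=None):
        rng = random.Random(seed)
        allocations = []
        while len(allocations) < n_participants:
            current = list(block)
            rng.shuffle(current)      # random order within the block
            allocations.extend(current)
        return allocations[:n_participants]

    print(block_randomisation(12, seed=42))
    # The two arms never differ by more than two allocations at any
    # point in recruitment.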

For more information on Randomisation, visit: http://www.bmj.com/content/316/7126/201

Ethical Considerations

There are clear ethical considerations regarding sample size, as discussed above. More generally, whether a study is considered ethical or unethical is a subjective judgement based on cultural norms, which vary from society to society and over time. Ethical considerations are more important in intervention studies than in any other type of epidemiological study.

For instance, it would be unethical to use a placebo as a comparator if there is already an established treatment of proven value. It would also be unethical to enrol more participants than are needed to answer the question set by the trial. Conversely, it would be unethical to recruit so few participants that the trial could not answer its question.

To be ethical, a trial also needs equipoise: it must be answering a real question, so that it is scientifically justified. This means there is as yet no evidence for the intervention in the specific circumstances, so nobody truly knows whether it has an effect. For example, you would not be in equipoise if you were assessing paracetamol against a placebo for pain relief; there is already evidence that paracetamol is an acceptable reliever of low-level pain, so this research would be unethical because some patients would be given a placebo when a perfectly viable alternative is known. In this case, it might be preferable to test a new compound pain reliever against paracetamol in patients with low-level pain.

Therefore intervention trials are ethically justified only in a situation of uncertainty, when there is genuine doubt concerning the value of a new intervention in terms of its benefits and risks. The researcher must have some evidence that the intervention may be of benefit, for instance, from laboratory and animal studies, or from observational epidemiological studies. Otherwise, there would be no justification for conducting a trial.

Evaluating an Intervention

Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, and then to test them in a carefully phased approach: starting with a series of pilot studies targeted at each of the key uncertainties in the design, and moving on to an exploratory and then a definitive evaluation. The results should be disseminated as widely and persuasively as possible, with further research to assist and monitor the process of implementation.

In practice, evaluation takes place in a wide range of settings that constrain researchers' choice of interventions to evaluate and their choice of evaluation methods. Ideas for complex interventions emerge from various sources, including past practice, existing evidence, policy makers or practitioners, new technology, and commercial interests. The source may have a significant impact on how much leeway the investigator has to modify the intervention or to choose an ideal evaluation design. It is important not to rush decisions: strong evidence may be ignored, or weak evidence rapidly taken up, depending on its political acceptability or its fit with other ideas about what works. One should also be wary of 'blanket' statements about which designs suit which kinds of intervention (e.g. 'randomised trials are inappropriate for community-based interventions, psychiatry, surgery, etc.'). A design may rarely be used in a particular field, but that does not mean it cannot be; the researcher should decide on the basis of the specific characteristics of their study, such as the expected effect size and the likelihood of selection and other biases.

A crucial aspect of evaluating an intervention is the choice of outcomes from the trial. The researcher must determine which outcomes are most important and which are secondary, as well as how to deal with multiple outcomes in the analysis. A single primary outcome with a small number of secondary outcomes is the most straightforward from the point of view of the statistical analysis, although this may not represent the best use of the data. A good theoretical understanding of the intervention, derived from careful development work, is key to choosing suitable outcome measures, and the researcher should remain alert to the possibility of unintended and possibly adverse consequences. Consideration should also be given to the sources of variation in outcomes; a subgroup analysis may be required. As far as possible, it is important to bear in mind the decision makers – national or local policy makers, opinion leaders, practitioners, patients, the public, etc. – and whether the evidence is likely to be persuasive, especially if it conflicts with deeply entrenched values. An economic evaluation should be included if at all possible, as this will make the results far more useful for decision makers. Ideally, economic considerations should be taken fully into account in the design of the evaluation, to ensure that the cost of the study is justified by the potential benefit of the evidence it will generate.

Types of Randomised Clinical Designs

Simple or parallel trials use the most elementary form of randomisation, which could in principle be achieved by merely tossing a coin. Coin-tossing should be discouraged in clinical studies, however, as it cannot be reproduced or checked; the alternative is to use a table of random numbers or a computer-generated randomisation list. The disadvantage of simple randomisation is that it may result in markedly unequal numbers of subjects being allocated to each group. Simple randomisation may also lead to a skewed distribution of factors that affect the outcome of the trial: in a trial involving both sexes, for instance, there may be too many subjects of the same sex in one arm. This is particularly true in small studies.

Factorial trials can improve efficiency in intervention research by testing two or more hypotheses simultaneously (some factorial studies are more complex, involving a third or fourth factor). Subjects are first randomised to intervention A or B to address one hypothesis, and then, within each intervention, randomised again to intervention C or D to evaluate a second question. The advantage of this design is its ability to answer more than one question in a single trial. It also allows the researcher to assess interactions between interventions, which cannot be achieved in single-factor studies.
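
To make the double randomisation concrete, the sketch below allocates each participant independently to A or B and then to C or D. The intervention labels are hypothetical, and simple (unblocked) randomisation is assumed:

    # Sketch: 2x2 factorial allocation - two independent randomisations
    # per participant. Intervention labels are hypothetical.
    import random

    def factorial_allocation(n_participants, seed=None):
        rng = random.Random(seed)
        return [(rng.choice(["A", "B"]),    # first hypothesis: A vs B
                 rng.choice(["C", "D"]))    # second hypothesis: C vs D
                for _ in range(n_participants)]

    for participant, arms in enumerate(factorial_allocation(8, seed=1), 1):
        print(participant, arms)   # e.g. 1 ('A', 'D'), 2 ('B', 'C'), ...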

Crossover trials, as the name suggests, are trials in which each subject acts as their own control by receiving at least two interventions: a subject receives the test intervention during one period of the trial and the standard intervention or placebo during another, with the order alternated. The crossover design is not limited to two interventions; researchers can design crossover studies involving three interventions – two treatments and a control arm. The order in which each individual receives the interventions should be determined by random allocation, and there should be a wash-out period before the next intervention is administered, to avoid any “carry-over” effects. The design is therefore only suitable where the interventions have no long-term effect, for example where the study drug has a short half-life. Since each subject acts as their own control, the design eliminates inter-subject variability, so fewer subjects are required. Crossover studies are consequently used in early-phase studies such as pharmacokinetic studies.
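
The order of interventions in a two-period crossover can itself be randomised. A minimal sketch, with hypothetical labels and an assumed wash-out between the periods:

    # Sketch: randomising the treatment order in a two-period
    # crossover trial. Labels and numbers are illustrative.
    import random

    def crossover_orders(n_participants, seed=None):
        rng = random.Random(seed)
        return [rng.choice([("test", "standard"), ("standard", "test")])
                for _ in range(n_participants)]

    for pid, (period1, period2) in enumerate(crossover_orders(6, seed=7), 1):
        print(pid, period1, "-> wash-out ->", period2)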

Cluster trials allocate an intervention to groups of people, or clusters, against a control – often by geographical area, community, or health centre – and are mainly used to address public health concerns. An example would be testing the effect of an education programme, against a control, in reducing deaths among people who have suffered a heart attack.
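
In a cluster trial the unit of randomisation is the cluster, not the individual. The sketch below allocates whole health centres (hypothetical names) to the two arms:

    # Sketch: cluster randomisation - whole health centres are
    # allocated to intervention or control. Names are hypothetical.
    import random

    centres = ["Centre A", "Centre B", "Centre C",
               "Centre D", "Centre E", "Centre F"]
    rng = random.Random(3)
    rng.shuffle(centres)
    half = len(centres) // 2
    allocation = {"intervention": centres[:half],
                  "control": centres[half:]}
    print(allocation)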

Adaptive design is sometimes referred to as “flexible design”: a design that allows adaptations to the trial and/or its statistical procedures after initiation, without undermining the validity and integrity of the trial. Adaptive trial design allows the study design to be modified as data accrue. The purpose is not only to identify the clinical benefits of the test treatment efficiently, but also to increase the probability of success of clinical development. Among the benefits of adaptive designs are that they reflect medical practice in the real world, and that they are ethical with respect to both the efficacy and the safety of the test treatment under investigation, making them efficient in both the early and late phases of clinical development.

The main drawbacks are the concern over whether the p-value or confidence interval obtained after a modification is reliable or correct, and the possibility that adaptive methods lead to a substantially different trial that can no longer address the scientific and medical questions it set out to answer. There is also a risk of introducing bias into subject selection or into the way the results are evaluated. In practice, commonly seen adaptations include, but are not limited to: a change in sample size or in allocation to treatments; the deletion, addition, or change of treatment arms; a shift in the target patient population, such as changes in inclusion/exclusion criteria; a change in study endpoints; and a change in study objectives, such as a switch from a superiority to a non-inferiority trial. Before adopting an adaptive design, it is prudent to discuss it with the regulators, both to establish the level of modification that will be acceptable to them and to understand the regulatory requirements for review and approval.

Adaptive trial designs can be used in rare, life-threatening diseases with unmet medical needs, as they speed up the clinical development process without compromising safety or efficacy. Commonly considered strategies include adaptive seamless phase I/II studies, in which several doses or schedules are run at the same time while schedules or doses that prove ineffective or toxic are dropped. Similar approaches can be used for seamless phase II/III studies.

Equivalence trials test whether a new treatment or intervention is equivalent to the current treatment. It is becoming difficult to demonstrate that a particular intervention is better than an existing control, particularly in therapeutic areas where the drug development process has improved greatly. The goal of an equivalence study is to show that the intervention is no worse – and may be less toxic, less invasive, or have some other benefit – than an existing treatment. It is important to ensure that the active control selected is an established standard treatment for the indication being studied, at the dose and in the formulation proven to be effective. Studies demonstrating the benefit of the control against placebo must be sufficiently recent that no important medical advances or other changes have occurred in the meantime. The populations in which the control was tested should be similar to those planned for the new trial, and the researcher must be able to specify at the start of the study what they mean by equivalence.

Non-inferiority trials test whether a new treatment or intervention is non-inferior to the current gold standard. The requirements are similar to those of equivalence studies: there should be similarity in the populations, the concomitant therapy, and the dosage of the interventions. It is impossible to show statistically that two therapies are identical, as an infinite sample size would be required; instead, if the intervention falls sufficiently close to the standard, as defined by reasonable pre-specified boundaries, it is deemed no worse than the control.
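
Non-inferiority is commonly judged with a one-sided confidence bound: the new treatment is deemed non-inferior if the lower bound of the confidence interval for the difference in success rates lies above the pre-specified margin. A minimal sketch of that approach; the margin and trial results are illustrative assumptions:

    # Sketch: non-inferiority assessment via a confidence bound for
    # the difference in success proportions. Numbers are illustrative.
    from math import sqrt
    from scipy.stats import norm

    def non_inferior(successes_new, n_new, successes_std, n_std,
                     margin=0.10, alpha=0.05):
        p_new, p_std = successes_new / n_new, successes_std / n_std
        diff = p_new - p_std
        se = sqrt(p_new * (1 - p_new) / n_new +
                  p_std * (1 - p_std) / n_std)
        lower = diff - norm.ppf(1 - alpha) * se   # one-sided lower bound
        return lower > -margin, lower

    print(non_inferior(148, 200, 150, 200))
    # Non-inferior here, since the lower bound exceeds -0.10.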

References

Emmanuel G and Geert V, “Clinical Trials and Intervention Studies”. http://www.wiley.com/legacy/wileychi/eosbs/pdfs/bsa099.pdf

“Intervention trials”, IARC. http://www.iarc.fr/en/publications/pdfs-online/epi/cancerepi/CancerEpi-7.pdf

“Intervention studies”. http://www.drcath.net/toolkit/intervention-studies

Medical Research Council, “Developing and evaluating complex interventions: new guidance”. http://www.sphsu.mrc.ac.uk/Complex_interventions_guidance.pdf

Chow S and Chang M, “Adaptive design methods in clinical trials – a review”, Orphanet Journal of Rare Diseases 2008; 3: 11. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2422839/pdf/1750-1172-3-11.pdf

Kadam P and Bhalerao S, “Sample size calculation”, International Journal of Ayurveda Research 2010; 1(1): 55–57. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/

Fleming TR, “Surrogate endpoints and FDA's accelerated approval process”, Health Affairs 2005; 24(1): 67–78. http://content.healthaffairs.org/content/24/1/67.full

Friedman LM, Furberg CD, and DeMets DL, “Study Population”, in Fundamentals of Clinical Trials, 4th edn (2010), chapter 4, 55–65.



ADHD in the Classroom: Helping Children Succeed in School

At a glance

  • Children with attention-deficit/hyperactivity disorder (ADHD) experience more obstacles in their path to success than the average student.
  • Teachers and parents can help children with ADHD do well in school.


What to know

To meet the needs of children with ADHD, schools may offer

  • ADHD treatments, such as behavioral classroom management or organizational training;
  • Special education services; or
  • Accommodations to lessen the effect of ADHD on their learning.


CDC funds the National Resource Center on ADHD (NRC), a program of Children and Adults with Attention Deficit/Hyperactivity Disorder (CHADD). The NRC provides resources, information, and advice for parents on how to help their child. Learn more about their services.

How schools can help children with ADHD

The American Academy of Pediatrics (AAP) recommends that the school environment, program, or placement is a part of any ADHD treatment plan.

AAP also recommends teacher-administered behavior therapy as a treatment for school-aged children with ADHD. You can talk to your child's healthcare provider and teachers about working together to support your child.

Classroom treatment strategies for students with ADHD

There are school-based management strategies shown to be effective for students with ADHD:

  • Behavioral classroom management
  • Organizational training

Behavioral classroom management

The behavioral classroom management approach encourages a student's positive behaviors in the classroom, through a reward system or a daily report card, and discourages their negative behaviors. This teacher-led approach has been shown to influence student behavior in a constructive manner, increasing academic engagement. Although tested mostly in elementary schools, behavioral classroom management has been shown to work for students of all ages.

Organizational training teaches children time management, planning skills, and ways to keep school materials organized in order to optimize student learning and reduce distractions. This management strategy has been tested with children and adolescents.


Special education services and accommodations

Most children with ADHD receive some school services, such as special education services and accommodations. There are two laws that govern special services and accommodations for children with disabilities:

  • The Individuals with Disabilities Education Act (IDEA)
  • Section 504 of the Rehabilitation Act of 1973

Learn more about IDEA vs Section 504

The support a child with ADHD receives at school will depend on whether they meet the eligibility requirements for one of two federal plans, under IDEA and Section 504:

  • An Individualized Education Program (IEP), or
  • A 504 Plan

What are the main differences between an IEP and a 504 Plan?

• IEP: Provides individualized special education and related services to meet the needs of the child and is part of the Individuals with Disabilities Education Act (IDEA).

• 504 Plan: Provides services and changes to the learning environment to meet the needs of the child as adequately as other students and is part of Section 504 of the Rehabilitation Act.

Accommodations

IEPs and 504 Plans can offer accommodations for students to help them manage their ADHD, including the following:

  • Extra time on tests
  • Instruction and assignments tailored to the child
  • Positive reinforcement and feedback
  • Using technology to assist with tasks
  • Allowing breaks or time to move around
  • Changes to the environment to limit distraction
  • Extra help with staying organized

There is limited information about which types of accommodations are effective for children with ADHD. 3 However, there is evidence that setting clear expectations, providing immediate positive feedback, and communicating daily with parents through a daily report card can help. 4

What teachers and school administrators can do to help

For teachers, helping children manage their ADHD symptoms can present a challenge. Most children with ADHD are not enrolled in special education classes but do need extra assistance on a daily basis.

Helping students with ADHD

Positive discipline practices at school can help make school routines more predictable and achievable for children. Children with ADHD benefit when schools use positive rather than punitive disciplinary strategies.

Close collaboration between the school, parents, and healthcare providers will help ensure the child gets the right support. Here are some tips for classroom success:

Communication

  • Give frequent feedback and attention to positive behavior.
  • Be sensitive to the influence of ADHD on emotions, such as self-esteem issues or difficulty regulating feelings.
  • Provide extra warnings before transitions and changes in routines.
  • Understand that children with ADHD may become deeply absorbed in activities that interest them (hyper-focus) and may need extra assistance shifting their attention.

Assignments and tasks

  • Make assignments clear—check with the student to see if they understand what they need to do.
  • Provide choices to show mastery (for example, let the student choose among a written essay, oral report, online quiz, or hands-on project).
  • Make sure assignments are not long and repetitive. Shorter assignments that provide a little challenge without being too hard may work well.
  • Allow breaks—for children with ADHD, paying attention takes extra effort and can be very tiring.
  • Allow time to move and exercise.
  • Minimize distractions in the classroom.
  • Use organizational tools, such as a homework folder, to limit the number of things the child has to track.

Develop a plan that fits the child

  • Observe and talk with the student about what helps or distracts them (for example, fidget tools, limiting eye contact when listening, background music, or moving while learning can be beneficial or distracting, depending on the child).
  • Communicate with parents on a regular basis.
  • Involve the school counselor or psychologist.

More information for teachers and school administrators

CHADD's National Resource Center on ADHD provides information for teachers and educators from experts on how to help students with ADHD in the classroom.


Parent education and support


What every parent should know

  • School support and services are regulated by laws. The U.S. Department of Education has developed a "Know Your Rights" letter for parents and a resource guide for educators to help families, students, educators, and other interested groups better understand how these laws apply to students with ADHD, so that students can get the services and education they need to be successful.
  • Healthcare providers also play an important part in collaborating with schools to help children get the special services they need. 5

More information

  • ADHD Toolkits — Parents, Caregivers & Educators
  • Healthy and Supportive School Environments | CDC
  • Society of Clinical Child & Adolescent Psychology - Effective child therapy: ADHD
  • Center on PBIS | Students with Disabilities
  • Evans SW, Owens JS, Wymbs BT, Ray AR. Evidence-Based Psychosocial Treatments for Children and Adolescents With Attention Deficit/Hyperactivity Disorder. J Clin Child Adolesc Psychol. 2018 Mar-Apr;47(2):157-198.
  • DuPaul GJ, Chronis-Tuscano A, Danielson ML, Visser SN. Predictors of Receipt of School Services in a National Sample of Youth With ADHD. J Atten Disord. 2019 Sep;23(11):1303-1319.
  • Harrison JR, Bunford N, Evans SW, Owens JS. Educational accommodations for students with behavioral challenges: A systematic review of the literature. Review of Educational Research. 2013 Dec;83(4):551-97.
  • Moore DA, Russell AE, Matthews J, Ford TJ, Rogers M, Ukoumunne OC, Kneale D, Thompson-Coon J, Sutcliffe K, Nunns M, Shaw L. School-based interventions for attention-deficit/hyperactivity disorder: a systematic review with multiple synthesis methods. Review of Education. 2018 Oct;6(3):209-63.
  • Lipkin PH, Okamoto J; Council on Children with Disabilities; Council on School Health. The Individuals With Disabilities Education Act (IDEA) for Children With Special Educational Needs. Pediatrics. 2015 Dec;136(6):e1650-62.
  • CHADD. Education. Available at: https://chadd.org/for-parents/education/. Accessed on November 17, 2023.
  • CHADD. Overview. Available at: https://chadd.org/for-educators/overview/. Accessed on November 17, 2023.
  • CHADD. About the National Resource Center. Available at: https://chadd.org/about/about-nrc/. Accessed on November 17, 2023.
  • CHADD. Individuals with Disabilities Education Act. Available at: https://chadd.org/for-parents/individuals-with-disabilities-education-act/#:~:text=What%20are%20my%20responsibilities%20as%20a%20parent%3F. Accessed on November 17, 2023.
  • U.S. Department of Education. Know Your Rights: Students with ADHD. Available at: https://www2.ed.gov/about/offices/list/ocr/docs/dcl-know-rights-201607-504.pdf. Accessed on November 17, 2023.
  • U.S. Department of Education. Dear Colleague Letter and Resource Guide on Students with ADHD. Available at: https://www2.ed.gov/about/offices/list/ocr/letters/colleague-201607-504-adhd.pdf. Accessed on November 17, 2023.
  • The American Academy of Pediatrics. How Schools Can Help Children with ADHD. Available at: https://www.healthychildren.org/English/health-issues/conditions/adhd/pages/Your-Child-At-School.aspx. Accessed on November 17, 2023.
  • ChangeLab Solutions. Developing Positive Disciplinary Strategies to Support Children with ADHD and Tourette Syndrome. Available at: https://www.changelabsolutions.org/product/positive-disciplinary-strategies-children-adhd-tourette?utm_source=ChangeLab+Solutions+Active&utm_campaign=0c815dc553-CMH-School-Disc_Launch_724&utm_medium=email&utm_term=0_-0c815dc553-%5BLIST_EMAIL_ID%5D. Accessed on July 17, 2024.
  • Society of Clinical Child & Adolescent Psychology. Inattention & Hyperactivity (ADHD). Available at: https://effectivechildtherapy.org/concerns-symptoms-disorders/disorders/inattention-and-hyperactivity-adhd/. Accessed on November 17, 2023.
  • Center on Positive Behavioral Interventions & Supports (PBIS). Students with Disabilities. Available at: https://www.pbis.org/topics/students-with-disabilities. Accessed on October 22, 2024.


The impact of the herd health interventions in small ruminants in low input production systems in Ethiopia


  • From International Livestock Research Institute (ILRI)
  • Published on 30.10.24


Diseases have a negative impact on production and profitability of small ruminants.

A good herd health program can decrease the number of sick animals and improve herd performance.

In a longitudinal study, small ruminant herd health interventions, including community-based strategic gastrointestinal parasite control, prevention and control of major respiratory diseases, and capacity development activities, were implemented.

In four districts of Ethiopia where the Community Based Breeding Program is implemented, morbidity and mortality data were collected from January 2018 to July 2021 on 1,047 smallholder farms, with the objective of evaluating the impact of the herd health interventions.

A total of 2643 sick animals and 516 deaths of small ruminants were recorded during the study period.

The disease cases were categorized into eight groups: gastrointestinal, neurological, reproductive, respiratory, skin, systemic, other diseases (eye disease, foot disease, etc.), and unknown diseases.

Chi-square tests and proportions were used to analyze morbidity and mortality by district, agro-ecological zone, and age of the animal.

The data showed that the occurrence of cases and the morbidity rate generally declined from 2018 to 2021 in intervention villages.

Overall, the morbidity rate in young animals (7.36%) was higher than in adults (3.49%), and the difference in mortality rates between young and adult animals was also statistically significant.

The morbidity and mortality rates varied significantly among districts and among agro-ecologies.

According to the data, treating and following up on sick animals reduced the mortality rate significantly.

The relative risk of death for animals treated after a case was reported was 0.135.

Overall, the intervention impact analysis revealed that the morbidity rate decreased significantly over the intervention years (from 6.31% in 2018 to 3.02% in 2021) and that herd health interventions provide added value. Treatment and follow-up of sick animals, enabled by early reporting of cases, significantly reduced the mortality rate.

It is recommended that herd health management be a core activity in small ruminant production programs.

Citation: Moliso, M.M., Molla, W., Arke, A., Nana, T., Zewudie, F.A., Tibebu, A., Haile, A., Rekik, M., Magnusson, U., Wieland, B. and Knight-Jones, T. 2024. The impact of the herd health interventions in small ruminants in low input production systems in Ethiopia. Frontiers in Veterinary Science 11: 1371571.

Photo: A pastoralist milks her goat, Borana, Ethiopia (credit: ILRI/Zerihun Sewunet)

