
Module 3 Chapter 1: Overview of Intervention/Evaluation Research Approaches

In our prior course, you learned how the nature of an investigator’s research question dictates the type of study approach and design that might be applied to achieve the study aims. Intervention research typically asks questions related to the outcomes of an intervention effort or approach. However, questions also arise concerning implementation of interventions, separate from understanding their outcomes. Practical, philosophical, and scientific factors contribute to investigators’ intervention study approach and design decisions.

In this chapter you learn:

  • how content from our earlier course about study approaches and designs relates to intervention research;
  • additional approaches to intervention research (participatory research; formative, process, outcome, and cost-related evaluation research); and
  • intervention research strategies for addressing intervention fidelity and internal validity concerns.

Review and Expansion: Study Approaches

In our earlier course you became familiar with the ways that research questions lead to the choice of research approach and methods. Intervention and evaluation research are no different: the question dictates the approach. In the earlier course, you also became familiar with the philosophical, conceptual, and practical aspects of different approaches to social work research: qualitative, quantitative, and mixed methods. These methods are used in research for evaluating practice and understanding interventions, as well. The primary emphasis in this module is on quantitative research designs for practice evaluation and understanding interventions. However, taking a few moments to examine qualitative and mixed methods in these applications is worthwhile. Additionally, we introduce forms of participatory research—something we did not discuss regarding efforts to understand social work problems and diverse populations. Participatory research is an approach rich in social work tradition.

Qualitative methods in intervention & evaluation research.

The research questions asked by social workers about interventions often lend themselves to qualitative study approaches. Here are six examples.

  • Early in the process of developing an intervention, social workers might simply wish to create a rich description of the intervention, the contexts in which it is being delivered, or the clients’ experience with the intervention. This type of information is going to be critically important in developing a standardized protocol which others can use in delivering the intervention, too. Remember that qualitative methods are ideally suited for answering exploratory and descriptive questions.
  • Qualitative methods are well-suited to exploring different experiences related to diversity—the results retain individuality arising from heterogeneity rather than homogenizing across individuals to achieve a “normative” picture.
  • Qualitative methods are often used to assess the degree to which the delivery of an intervention adheres to the procedures and protocol originally designed and empirically tested. This is known as an intervention fidelity issue (see the section below on the topic of process evaluation).
  • Intervention outcomes are sometimes evaluated using qualitative approaches. For example, investigators wanted to learn from adult day service participants what they viewed as the impact of the program on their own lives (Dabelko-Schoeny & King, 2010). The value of such information is not limited to evaluating this one program. Evaluators are informed about important evaluation variables to consider in their own efforts to study interventions delivered to older adults—variables beyond the typical administrative criteria of concern. The study participants identified social connections, empowering relationships with staff, and enjoyment of activities as important evaluation criteria.
  • Assessing the need for intervention (needs assessment) is often performed with qualitative approaches, especially focus groups, open-ended surveys, and GIS mapping.
  • Qualitative approaches are an integral aspect of mixed-methods approaches.

Qualitative approaches often involve in-depth data from relatively few individuals, seeking to understand their individual experiences with an intervention. As such, these study approaches are relatively sensitive to nuanced individual differences—differences in experience that might be attributed to cultural, clinical, or other demographic diversity. This is true, however, only to the extent that diversity is represented among study participants, and individuals cannot be presumed to represent groups or populations.


Quantitative methods in intervention & evaluation research.

Many intervention and evaluation research questions are quantitative in nature, leading investigators to adopt quantitative approaches or to integrate quantitative approaches in mixed methods research. In these instances, “how much” or “how many” questions are being asked, questions such as:

  • how much change was associated with intervention;
  • how many individuals experienced change/achieved change goals;
  • how much change was achieved in relation to the resources applied;
  • what trends in numbers were observed.

Many study designs detailed in Chapter 2 reflect the philosophical roots of quantitative research, particularly those designed to zero in on causal inferences about intervention—the explanatory research designs. Quantitative approaches are also used in descriptive and exploratory intervention and evaluation studies. By nature, quantitative studies tend to aggregate data provided by individuals, and in this way are very different from qualitative studies. Quantitative studies seek to describe what happens “on average” rather than describing individual experiences with the intervention—you learned about central tendency and variation in our earlier course (Module 4). Differences in experience related to demographic, cultural, or clinical diversity might be quantitatively assessed by comparing how the intervention was experienced by different groups (e.g., those who differ on certain demographic or clinical variables). However, data for the groups are treated in the aggregate (across individuals) with quantitative approaches.
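To make the aggregate nature of quantitative evaluation concrete, here is a minimal Python sketch of the kinds of summaries these questions call for. The scores, group labels, and change goal are entirely hypothetical.

```python
# A minimal sketch of aggregating quantitative evaluation data.
# All scores, group labels, and the change goal are hypothetical.
from statistics import mean, stdev

pre   = [20, 24, 18, 30, 22, 26, 19, 28, 21, 25]   # pre-intervention scores
post  = [26, 29, 22, 33, 30, 27, 25, 34, 24, 31]   # post-intervention scores
group = ["A"] * 5 + ["B"] * 5                      # e.g., two demographic groups

change = [po - pr for pr, po in zip(pre, post)]

# "How much change was associated with intervention?" -- answered on average
print(f"Mean change: {mean(change):.1f} (SD = {stdev(change):.1f})")

# "How many individuals achieved change goals?" (hypothetical goal: +5 points)
print(f"Met change goal: {sum(c >= 5 for c in change)} of {len(change)}")

# Diversity-related differences are assessed in the aggregate, by group,
# rather than by describing each individual's experience.
for g in ("A", "B"):
    sub = [c for c, lab in zip(change, group) if lab == g]
    print(f"Group {g} mean change: {mean(sub):.1f}")
```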

Mixed methods in intervention & evaluation research.

Qualitative and quantitative approaches are very helpful in evaluation and intervention research as part of a mixed-methods strategy for investigating the research questions. In addition to the examples previously discussed, integrating qualitative and quantitative approaches in intervention and evaluation research is often done as a means of enriching the results derived from one or the other approach. Here are three scenarios to consider.

  • Investigators wish to use a two-phase approach in studying or evaluating an intervention. First, they adopt a qualitative approach to inform the design of a quantitative study, then they implement the quantitative study as a second phase. The qualitative phase might help inform any aspect of the quantitative study design, including participant recruitment and retention, measurement and data collection, and presenting study results.
  • Investigators use a two-phase approach in studying or evaluating an intervention. First, they implement a quantitative study. Then, they use a qualitative approach to explore the appropriateness and adequacy of how they interpret their quantitative study results.
  • Investigators combine qualitative and quantitative approaches in a single intervention or evaluation study, allowing them to answer different kinds of questions about the intervention.

For example, a team of investigators applied a mixed methods approach in evaluating outcomes of an intensive experiential learning experience designed to prepare BSW and MSW students to engage effectively in clinical supervision (Fisher, Simmons, & Allen, 2016). BSW students provided quantitative data in response to an online survey, and MSW students provided qualitative self-assessment data. The quantitative data answered a research question about how students felt about supervision, whereas the qualitative data were analyzed for demonstrated development in critical thinking about clinical issues. The investigators concluded that their experiential learning intervention contributed to a stronger supervisory alliance, to BSW students' satisfaction with their supervisors, and to MSW students coming to view supervision as more than an administrative task.


Cross-Sectional & Longitudinal Study Designs.

You are familiar with the distinction between cross-sectional and longitudinal study designs from our earlier course. In that course, we looked at these designs in terms of understanding diverse populations, social work problems, and social phenomena. Here we address how the distinction relates to the conduct of research to understand social work interventions.

  • A cross-sectional study involves data collection at just one point in time. In a program evaluation, for example, the agency might look at some outcome variable at the point when participants complete an intervention or program. Or, perhaps an agency surveys all clients at a single point in time to assess their level of need for a potential new service the agency might offer. Because the data are collected from each person at only one point in time, these are both cross-sectional studies. In terms of intervention studies, one measurement point obviously needs to be after the intervention for investigators to draw inferences about the intervention. As you will see in the discussion of intervention study designs, there are considerable limitations to using a single measurement to evaluate an intervention (see post-only designs in Chapter 2).
  • A longitudinal study involves data collection at two or more points in time. A great deal of intervention and evaluation research is conducted using longitudinal designs—answering questions about what changes might be associated with the intervention being delivered. For example, in program evaluation, an agency might compare how clients were functioning on certain variables at the time of discharge compared to their level of functioning at intake to the program. Because the same information is collected from each individual at two points in time (pre-intervention and post-intervention), this is a longitudinal design.
  • Distinguishing cross-sectional from longitudinal designs in studies of systems beyond the individual person can become confusing. When social workers intervene with individuals, families, or small groups, it is evident that a longitudinal study involves the same individuals or members at different points in time—perhaps measuring individuals before, immediately after, and months after intervention (this is called follow-up). However, if an intervention is conducted in a community, a state, or across the nation, the data might not be collected from the same individual persons at each point in time—the unit of analysis is what matters here. For example, if the longitudinal study’s unit of analysis is the 50 states, District of Columbia, and 5 inhabited territories of the United States, data are repeatedly collected at that level (states, DC, and territories), perhaps not from the same individual persons in each of those communities. (The brief data sketch following this list illustrates the distinction.)
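The sketch below contrasts the two designs using hypothetical records; all clients, scores, and state figures are invented for illustration.

```python
# Hypothetical records contrasting cross-sectional and longitudinal designs.

# Cross-sectional: each unit of analysis contributes data at ONE time point.
cross_sectional = [
    {"client": "C01", "time": "discharge", "functioning": 7},
    {"client": "C02", "time": "discharge", "functioning": 5},
]

# Longitudinal: the SAME units contribute data at two or more time points.
longitudinal = [
    {"client": "C01", "intake": 4, "discharge": 7, "followup_6mo": 6},
    {"client": "C02", "intake": 3, "discharge": 5, "followup_6mo": 5},
]

# When the unit of analysis is larger than the person (e.g., states), the
# study is longitudinal at the state level even if different individuals
# are surveyed at each wave.
state_level = [
    {"state": "OH", "year": 2018, "rate_per_100k": 35.9},
    {"state": "OH", "year": 2019, "rate_per_100k": 38.3},  # same unit, new time
]
```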


Formative, Process, and Outcome Evaluation

Practice and program evaluation are important aspects of social work practice. It would be nice if we could simply rely on our own sense of what works and what does not. However, social workers are only human and, as we learned in our earlier course, human memory and decisions are vulnerable to bias. Sources of bias include recency, confirmation, and social desirability biases.

  • Recency bias occurs when we place higher emphasis on what has just happened (recently) than on what might have happened in the more distant past. In other words, a social worker might make a casual practice evaluation based on one or two exceptionally good or exceptionally bad recent outcomes rather than a longer, larger history of outcomes and systematic evidence.
  • Confirmation bias occurs when we focus on outcomes that reinforce what we believed, feared, or hoped would happen and de-emphasize alternative events or interpretations that might contradict those beliefs, fears, or hopes.
  • Social desirability bias by practitioners occurs when practice decisions are influenced by a desire to be viewed favorably by others—that could be clients, colleagues, supervisors, or others. In other words, a practice decision might be based on “popular” rather than “best” practices, and casual evaluation of those practices might be skewed to create a favorable impression.

In all three of these forms of bias, the problem is not necessarily intentional, but it does result in a lack of sufficient attention to evidence in monitoring one’s practices. For example, relying solely on qualitative comments volunteered by consumers (anecdotal evidence) is subject to a selection bias—individuals with strong opinions or a desire to support the social workers who helped them are more likely to volunteer than the general population of those served.

Thus, it is incumbent on social work professionals to engage in practice evaluation that is as free of bias as possible. The choice of systematic evaluation approach is dictated by the evaluation research question being asked. According to the Centers for Disease Control and Prevention (CDC), the four most common types of intervention or program evaluation are formative, process, outcome, and impact evaluation ( https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf ). Here, we consider these as three types, combining impact and outcome evaluation into a single category, and we add a fourth: cost-related evaluation.

Formative Evaluation.

Formative evaluation is emphasized during the early stages of developing or implementing a social work intervention, as well as following process or outcome evaluation as changes to a program or intervention strategy are considered. The aim of formative evaluation is to understand the context of an intervention, define the intervention, and evaluate feasibility of adopting a proposed intervention or change in the intervention (Trochim & Donnelly, 2007). For example, a needs assessment might be conducted to determine whether the intervention or program is needed, calculate how large the unmet need is, and/or specify where/for whom the unmet need exists. Needs assessment might also include conducting an inventory of services that exist to meet the identified need and where/why a gap exists (Engel & Schutt, 2013). Formative evaluation is used to help shape an intervention, program, or policy.

Formative evaluation process sequence

Process Evaluation.

Investigating how an intervention is delivered or a program operates is the purpose behind process evaluation (Engel & Schutt, 2013). The concept of intervention fidelity was previously introduced. Fidelity is a major focus of process evaluation, but it is not the only one. We know that the greater the degree of fidelity in delivery of an intervention, the more applicable the previous evidence about that intervention becomes in reliably predicting intervention outcomes. As fidelity in the intervention’s delivery drifts or wanes, previous evidence becomes less reliable and less useful in making practice decisions. Addressing this important issue is why many interventions with an evidence base supporting their adoption are manualized, providing detailed manuals for how to implement the intervention with fidelity and integrity. For example, the Parent-Child Interaction Therapy for Traumatized Children (PCIT-TC) treatment protocol is manualized, and training certification is available for practitioners to learn the evidence-based skills involved ( https://pcit.ucdavis.edu/ ). This strategy increases practitioners’ adherence to the protocol.

Process evaluation, sometimes called implementation evaluation and sometimes referred to as program monitoring, helps investigators determine the extent to which fidelity has been preserved. But, process evaluation serves other purposes, as well. For example, according to King, Morris and Fitz-Gibbon (1987), process evaluation helps:

  • document details about the intervention that might help explain outcome evaluation results,
  • keep programs accountable (delivering what they claim to deliver),
  • inform planned modifications and changes to the intervention based on evidence.

Process evaluation also helps investigators determine where the facilitators of and barriers to implementing an intervention might operate, and it can help interpret outcomes/results from the intervention, as well. Process evaluation efforts address the following:

  • Who delivered the intervention
  • Who received the intervention
  • What was (or was not) done during the intervention
  • When intervention activities occurred
  • Where intervention activities occurred
  • How the intervention was delivered
  • What facilitated implementation with fidelity/integrity
  • What presented as barriers to implementation with fidelity/integrity

For these reasons, many authors consider process evaluation to be a type of formative evaluation.
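As one concrete possibility, a process evaluator might code each observed session against a checklist of protocol components and compute a simple adherence proportion. The sketch below is hypothetical; real fidelity instruments are intervention-specific.

```python
# A hypothetical fidelity checklist scored for one observed session.
protocol_components = [
    "session_delivered_weekly",
    "manual_module_covered",
    "homework_reviewed",
    "delivered_by_certified_staff",
]

observed = {
    "session_delivered_weekly": True,
    "manual_module_covered": True,
    "homework_reviewed": False,        # a drift from the protocol
    "delivered_by_certified_staff": True,
}

# Proportion of protocol components delivered as designed
adherence = sum(observed[c] for c in protocol_components) / len(protocol_components)
print(f"Fidelity: {adherence:.0%} of protocol components delivered")
print("Components needing attention:",
      [c for c in protocol_components if not observed[c]])
```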

Process evaluation sequence

Outcome and Impact Evaluation.

The aim of outcome or impact evaluation is to determine effects of the intervention. Many authors refer to this as a type of summative evaluation, distinguishing it from formative evaluation: its purpose is to understand the effects of an intervention once it has been delivered. The effects of interest usually include the extent to which intervention goals or objectives were achieved. An important factor to evaluate concerns positive and negative “side effects”—those unintended outcomes associated with the intervention. These might include unintended impacts on the intervention participants, as well as impacts on significant others, those delivering the intervention, the program/agency/institutions involved, and others. While impact evaluation, as described by the CDC, is about policy and funding decisions and longer-term changes, we can include it as a form of outcome evaluation since the questions answered are about achieving intervention objectives. Outcome evaluation is based on the elements presented in the logic model created at the outset of intervention planning.

Process evaluation sequence, including early planning, intervention planning, and conclusion processes

Cost-Related Evaluation.

Social workers are frequently faced with efficiency questions related to the interventions we deliver—thus, cost-related evaluation is part of our professional accountability responsibilities. For example, once an agency has applied the evidence-based practice (EBP) process to select the best-fitting program options for addressing an identified practice concern, program planning is enhanced by information concerning which of the options is most cost-effective.  Here are some types of questions addressed in cost-related evaluation.

  • cost analysis: How much does it cost to deliver/implement the intervention with fidelity and integrity? This type of analysis typically analyzes monetary costs, converting inputs into their financial impact (e.g., space resources would be converted into cost per square foot; staffing costs would include salary, training, and benefits costs; materials and technology costs might include depreciation).

  • cost-benefit: What are the inputs and outputs associated with the intervention? This type of analysis involves placing a monetary value on each element of input (resources) and each of the outputs. For example, preventing incarceration would be converted to the dollars saved on jail/prison costs; and, perhaps, including the individuals’ ability to keep their jobs and homes which could be lost with incarceration, as well as preventing family members needing public assistance and/or children being placed in foster care if their family member is incarcerated.
  • cost-effectiveness: What is the ratio of cost units (numerator) to outcome units (denominator) associated with delivering an intervention? Outcomes are tied to the intervention goals rather than monetary units. For example, medical interventions are often analyzed in terms of DALYs (disability-adjusted life years)—units designed to indicate “disease burden,” calculated to represent the number of years lost to illness, disability, or premature death (morbidity and mortality). Outcomes might also be numbers of “cases,” such as deaths or hospitalizations related to suicide attempts, drug overdose events, students dropping out from high school, children reunited with their families (family reunification), reports of child maltreatment, persons un- or under-employed, and many more examples. Costs are typically presented as monetary units estimated from a cost analysis. (See http://www.who.int/heli/economics/costeffanalysis/en/ .)
  • cost-utility: A comparison of cost-effectiveness for two or more intervention options, designed to help decision-makers make informed choices between the options. (A worked sketch following this list illustrates cost-effectiveness and cost-utility calculations.)
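To illustrate the arithmetic, here is a worked sketch comparing two hypothetical programs; every dollar figure and outcome count is invented.

```python
# Worked sketch: cost-effectiveness ratios and a cost-utility comparison.
# All costs and outcome counts are invented for illustration.
programs = {
    "Program A": {"cost": 250_000, "outcomes": 40},  # e.g., family reunifications
    "Program B": {"cost": 180_000, "outcomes": 25},
}

# Cost-effectiveness: cost units (numerator) per outcome unit (denominator)
for name, p in programs.items():
    print(f"{name}: ${p['cost'] / p['outcomes']:,.0f} per reunification")

# Cost-utility: compare the options to inform a decision between them
best = min(programs, key=lambda n: programs[n]["cost"] / programs[n]["outcomes"])
print(f"Lower cost per outcome unit: {best}")
# Program A works out to $6,250 per reunification and Program B to $7,200,
# so the costlier program is actually the more cost-effective option here.
```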

Two of the greatest challenges with these kinds of evaluation are (1) ensuring that all relevant inputs and outputs are included in the analysis, and (2) realistically converting non-monetary costs and benefits into monetary units to standardize comparisons. An additional challenge has to do with budget structures: the gains might be realized in a different budget than where the costs are borne. For example, implementing a mental health or substance misuse treatment program in jails and prisons costs those facilities; the benefits are realized in budgets outside those facilities—schools, workplaces, medical facilities, family services, and mental health programs in the community. Thus, it is challenging to make decisions based on these analyses when constituents are situated in different systems operating with “siloed” budgets where there is little or no sharing across systems.

Example of siloed budgets

An Additional Point.

An intervention or evaluation effort does not necessarily need to be limited to one type. As in the case of mixed-methods approaches, it is sometimes helpful to engage in multiple evaluation efforts with a single intervention or program. A team of investigators described how they used formative, process, and outcome evaluation, all in the pursuit of understanding a single preventive public health intervention called VERB, designed to increase physical activity among youth (Berkowitz et al., 2008). Their formative evaluation efforts allowed the team to assess the intervention’s appropriateness for the target audience and to test different messages. The process evaluation addressed fidelity of the intervention during implementation. And the outcome evaluation led the team to draw conclusions concerning the intervention’s effects on the target audience. The various forms of evaluation utilized qualitative and quantitative approaches.

Participatory Research Approaches

One contrast previously noted between qualitative and quantitative research is the nature of the investigator’s role. In quantitative research, every effort is made to minimize investigator influence on the data collection and analysis processes. Qualitative research, on the other hand, recognizes the investigator as an integral part of the research process. Participatory research fits into this latter category.

“Participant observation is a method in which natural social processes are studied as they happen (in the field, rather than in the laboratory) and left relatively undisturbed. It is a means of seeing the social world as the research subjects see it, in its totality, and of understanding subjects’ interpretations of that world” (Engel & Schutt, 2013, p. 276).

This quote describes naturalistic observation very well. The difference with participant observation is that the investigator is embedded in the group, neighborhood, community, institution, or other entity under study. Participant observation is one approach used by anthropologists to understand cultures from an embedded rather than outsider perspective. For example, this is how Jane Goodall learned about chimpanzee culture in Tanzania: she became accepted as part of the group she observed, allowing her to describe the members’ behaviors and social relationships, her own experiences as a member of the group, and the theories she derived from 55 years of this work. In social work, the participant approach may be used to answer research questions of the types we explored in our earlier course: understanding diverse populations, social work problems, or social phenomena. The investigator might be a natural member of the group, where the role as group member precedes the role as observer. This is where the term indigenous membership applies: naturally belonging to the group. (The term “indigenous people” describes the native, naturally occurring inhabitants of a place or region.) It is sometimes difficult to determine how the indigenous member’s observations and conclusions might be influenced by his or her position within the group—for example, the experience might be different for men and women, members of different ages, or leaders. Thus, the conclusions need to be confirmed by a diverse membership.

Participant observers are sometimes “adopted” members of the group, where the role of observer precedes their role as group member. It is somewhat more difficult to determine if evidence collected under these circumstances reflects a fully accurate description of the members’ experience unless the evidence and conclusions have been cross-checked by the group’s indigenous members. Turning back to our example with Jane Goodall, she was accepted into the chimpanzee troop in many ways, but not in others—she could not experience being a birth mother to members of the group, for example.

Sometimes investigators are more actively engaged in the life of the group being observed. As previously noted, participant observation is about the processes being left relatively undisturbed (Engel & Schutt, 2013, p. 276). However, participant observers might be more actively engaged in change efforts, documenting the change process from “inside” the group promoting change. These instances are called participatory action research (PAR), where the investigator is an embedded member of the group, joining them in making a concerted effort to influence change. PAR involves three intersecting roles: participating in the group, engaging in the action process (planning and implementing interventions), and conducting research about the group’s action process (see Figure 2-1, adapted from Chevalier & Buckles, 2013, p. 10).

Figure 2-1. Venn diagram of participatory action research roles.


For example, Pyles (2015) described the experience of engaging in participatory action research with rural organizations and rural disaster survivors in Haiti following the January 12, 2010 earthquake. The PAR aimed to promote local organizations’ capacity to engage in education and advocacy and to secure much-needed resources for their rural communities (Pyles, 2015, p. 630). According to the author, rural Haitian communities have a history of experience with exploitative research where outsiders conduct investigations without the input or participation of community members, and where little or no capacity-building action occurs based on study results and recommendations. Pyles also raised the point that, “there are multiple barriers impeding the participation of marginalized people” in community building efforts, making PAR approaches even more important for these groups (2015, p. 634).

The term community-based participatory research (CBPR) refers to collaborative partnerships between members of a community (e.g., a group, neighborhood, or organization) and researchers throughout the entire research process. CBPR partners (internal and external members) all contribute their expertise throughout the process and share in all steps of decision-making. Stakeholder members of the community (or organization) are involved as active, equal partners in the research process; co-learning by all members of the collaboration is emphasized; and the approach is strengths-focused (Harris, 2010; Holkup, Tripp-Reimer, Salois, & Weinert, 2004). CBPR is relevant in our efforts to understand social work interventions since the process can result in interventions that are culturally appropriate, feasible, acceptable, and applicable for the community because they emerged from within that community. Furthermore, it is a community empowerment approach whereby self-determination plays a key role and the community is left with new skills for self-study, evaluation, and understanding the change process (Harris, 2010). The following characteristics help define the CBPR approach:

(a) recognizing the community as a unit of identity,

(b) building on the strengths and resources of the community,

(c) promoting colearning among research partners,

(d) achieving a balance between research and action that mutually benefits both science and the community,

(e) emphasizing the relevance of community-defined problems,

(f) employing a cyclical and iterative process to develop and maintain community/ research partnerships,

(g) disseminating knowledge gained from the CBPR project to and by all involved partners, and

(h) requiring long-term commitment on the part of all partners (Holkup, Tripp-Reimer, Salois, & Weinert, 2004, p. 2).

Quinn et al. (2017) published a case study of CBPR practices being employed with youth at risk of homelessness and exposure to violence. The authors cited a “paucity of evidence-based, developmentally appropriate interventions” to address the mental health needs of youth exposed to violence (p. 3). The CBPR process helped determine the acceptability of a person-centered trauma therapy approach called narrative exposure therapy (NET). The results of three pilot projects combined to inform the design of a randomized controlled trial (RCT) to study the impact of the NET intervention. The three pilot projects engaged researchers and members of the population to be served (youth at risk of homelessness and exposure to violence). The authors of the case study article discussed some of the challenges of working with youth in the CBPR and research processes. Adapted from Quinn et al. (2017), these included:

  • Compliance with federal regulations for research involving minors (defined as “children” in the policies). Compounding this challenge were the vulnerability of the youth due to their homelessness and the fact that many of the youth were not engaged with any adult who had legal authority to provide consent for them to participate.
  • The team was interdisciplinary, which brings many advantages. However, it also presented challenges regarding different perspectives about how to engage in the varied research processes of participant recruitment and retention, measurement, and intervention.
  • Logistics of conducting focus groups with this vulnerable population. Youth encounter difficulties with participating predictably, and for this vulnerable population the practical difficulties are compounded. They experience complex and often competing demands on their schedules, “including school obligations, court, group or other agency appointments, or childcare,” as well as managing public transportation schedules and other barriers (p. 11). Furthermore, members of the group may have pre-existing relationships and social network ties that can impinge on their comfort with openly sharing their experiences or perspectives in the group setting. They may also have skepticism and reservations about sharing with the adults leading the focus group sessions.

Awareness of these challenges can help CBPR teams develop solutions to overcome the barriers. The CBPR process, while time and resource intensive, can result in appropriate intervention designs for under-served populations where existing evidence is not available to guide intervention planning.


A somewhat different approach engages members of the community as consultants regarding interventions with which they may be engaged, rather than a full CBPR approach. This adapted consultation approach presents an important option for ensuring that interventions are appropriate and acceptable for serving the community. However, community members are less integrally involved in the action-related aspects of defining and implementing the intervention, or in the conduct of the implementation research. An example of this important community-as-consultant approach involved a series of six focus group sessions conducted with parents, teachers, and school stakeholders discussing teen pregnancy prevention among high-school aged Latino youth (Johnson-Motoyama et al., 2016). The investigating team reported recommendations and requests from these community members concerning the important role played by parents and potential impact of parent education efforts in preventing teen pregnancy within this population. The community members also identified the importance of comprehensive, empowering, tailored programming that addresses self-respect, responsibility, and “realities,” and incorporates peer role models. They concluded that local school communities have an important role to play in planning for interventions that are “responsive to the community’s cultural values, beliefs, and preferences, as well as the school’s capacity and teacher preferences” (p. 513). Thus, the constituencies involved in this project served as consultants rather than CBPR collaborators. However, the resulting intervention plans could be more culturally appropriate and relevant than intervention plans developed by “outsiders” alone.


One main limitation to conducting CBPR work is the immense amount of time and effort involved in developing strong working collaborative relationships—relationships that can stand the test of time. Collaborative relationships are often built from a series of “quick wins” or small successes over time, where the partners learn about each other, learn to trust each other, and learn to work together effectively.

Chapter Summary

This chapter began with a review of concepts from our earlier course: qualitative, quantitative, mixed-methods, cross-sectional and longitudinal approaches. Expanded content about approach came next: formative, process, outcome, and cost evaluation approaches were connected to the kinds of intervention questions social workers might ask, and participatory research approaches were introduced. Issues of cultural relevance were explored, as well. This discussion of approach leads to an expanded discussion of quantitative study design strategies, which is the topic of our next chapter.

Stop and Think

Social Work 3402 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


How to Write an Intervention Plan [+ Template]

Jenna Buckle

Implementing a multi-tiered system of supports (MTSS) without an intervention planning process is like trying to teach a class without a lesson plan. If you don't know where you're going (or have a plan for getting there), you won't be able to effectively support students.

Intervention plans are typically used as part of student support team processes for MTSS, RTI (response to intervention), or PBIS (positive behavioral interventions and supports). Once a caring adult determines that a student needs targeted support, the next step is to create an intervention plan.

In this post, we'll cover how to write an intervention plan and share a helpful template for getting started.

Table of Contents

What Is an Intervention Plan?

How to Write an Intervention Plan

  • Identify the Student(s)
  • Choose an Intervention Type and Tier
  • Create a Goal for the Student's Intervention Program
  • Select an Intervention Strategy
  • Assign an Adult Champion
  • Set a Timeline
  • Establish a Method for Progress Monitoring

Put This Into Practice

Without clear intervention plans, it's challenging to provide targeted support to students who need it most. These plans serve as roadmaps for educators, outlining specific strategies and goals to help students succeed academically, behaviorally, and socially.

By creating intervention plans, educators can ensure that students receive personalized support tailored to their individual needs. These plans enable educators to track progress, adjust strategies as necessary, and ultimately improve student outcomes across various domains.

Effective intervention plans are grounded in data, utilizing information about student performance, behavior, and social-emotional well-being to inform decision-making. They include measurable goals that allow educators to track progress objectively and establish clear timelines for intervention implementation and evaluation.

This comprehensive process ensures that intervention efforts are systematic, coordinated, and aligned with the needs of individual students. By following these steps, educators can create intervention plans that are targeted, feasible, and conducive to student success.


What Is an Intervention Plan?

An intervention plan is a blueprint for helping a student build specific skills or reach a goal. In other words, it's an action plan.

In general, intervention plans include a goal, intervention strategy, timeline, and progress monitoring method.
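To make those elements concrete, here is a minimal sketch of an intervention plan as a structured record. The field names, types, and sample values are illustrative assumptions that mirror the steps below, not Panorama's actual data model.

```python
# A hypothetical intervention plan record; fields mirror the steps below.
from dataclasses import dataclass
from datetime import date

@dataclass
class InterventionPlan:
    student: str
    focus_area: str            # e.g., "ELA", "math", "behavior", "SEL", "attendance"
    tier: int                  # 2 or 3
    goal: str                  # a SMART goal
    strategy: str              # e.g., "2x10 relationship building"
    adult_champion: str        # who delivers and monitors the intervention
    start_date: date
    duration_weeks: int        # five to six weeks at a minimum
    monitoring_frequency: str  # "weekly", "bi-weekly", or "monthly"
    baseline: float            # most recent assessment score
    target: float              # desired assessment score

plan = InterventionPlan(
    student="Charles",
    focus_area="math",
    tier=2,
    goal="Complete 80% of do-now activities with the support of manipulatives",
    strategy="Math time drills with manipulatives",
    adult_champion="Interventionist",
    start_date=date(2024, 1, 8),
    duration_weeks=6,
    monitoring_frequency="weekly",
    baseline=0.40,
    target=0.80,
)
```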

What Makes a Good Intervention Plan?

Before you get started building an intervention plan, make sure you have the necessary data! Look at the student's progress across multiple dimensions—academics, social-emotional learning, behavior, and attendance. This can help you make more informed decisions about what the student needs.

Here's a scenario to demonstrate this point:

 

Allie struggles with reading and acts out in reading class. You know this by looking at her academic and behavior data. However, data shows that Allie is also reporting a low sense of self-efficacy—which is how much students believe they can succeed in achieving academic outcomes. Together, this data paints the story that Allie is acting out in reading class in order to avoid having to read.

Instead of prescribing a standard behavior intervention for Allie, you may instead decide on delivering an intervention called “Breaks Are Better”—a modified CICO intervention that helps students take breaks rather than engage in unwanted avoidance behavior.

In addition to being data informed, good intervention plans are measurable and time-bound. You'll want a clear way to measure if the student is progressing, and a plan for how long you'll deliver the intervention.

The goal is to reach a decision point at the end of an intervention plan. Maybe the student has met their goal, and you can close out their intervention plan. Maybe the student is progressing, but the intervention should continue. Or, maybe the current intervention plan isn't working and it's time to rethink the strategies in place.

How to Write an Intervention Plan

Once you've determined that a student can benefit from targeted support, it's time to create an intervention plan. This plan will be your blueprint for helping the student build specific skills or reach a goal. You can download the intervention plan template below to follow step-by-step instructions for writing an intervention plan.

Screenshot of Panorama's intervention plan template

Pro tip for Panorama Users: Panorama Student Success simplifies the process of creating intervention plans. Click on “Create Plan” on a student’s profile page to build a plan for improving the student’s academic performance, behavior, attendance, and/or SEL. You can even generate a secure, temporary link for families to view students’ intervention plans and their progress.

1. Identify the student(s)

Which student will you be supporting? First, record the student's name at the top of the plan. You might also include additional information such as grade level, gender, or other demographic attributes or identifiers used by your school. 

(Keep in mind that you can also create an intervention plan for a small group of students that you're working with. The steps to create a group plan are the same.)

2. Choose an intervention type and tier 

What is the area of focus for the intervention? What subject (or domain) can the student benefit from extra support in? Examples could be English language arts (ELA), math, behavior, social-emotional learning (SEL), or attendance.

Next, specify Tier 2 or Tier 3 depending on the intensity of the intervention. Here is a refresher on the MTSS pyramid:

MTSS pyramid

  • Tier 3 includes more intensive interventions for students whose needs are not addressed at Tiers 1 or 2.
  • Tier 2 consists of individualized interventions for students in need of additional support.
  • Tier 1 is the foundation and includes universal supports for all students.

3. Create a goal for the student's intervention program

This is when you'll identify specific skills to be developed, or the goal you are looking to help the student achieve. 

Remember to frame these in the positive (an opportunity to grow) rather than the negative (a problem to solve).  

It can be helpful to use the SMART goal framework—setting a goal that is specific, measurable, attainable, relevant, and timely.

For example, to build a student's self-efficacy in math, you might set the following goal: “Charles will be able to complete 80% of his do-now activities at the beginning of each math lesson with the support of manipulatives.”

4. Select an intervention strategy

With the intervention goal in mind, identify a strategy or activity that could help this student reach the goal. Sample intervention strategies include 2x10 relationship building, a behavior management plan such as behavior-specific praise, graphic organizers, a lunch bunch, WOOP goal-setting, and math time drills.

Your school district may already have an evidence-based intervention menu to pick from. For example, if your district partners with Panorama, you have access to our Playbook, with over 700 evidence- and research-based interventions. In fact, the Panorama platform recommends interventions from Playbook whenever you create an intervention plan. If you don't have an existing intervention menu, here are a few resources to help you get started building your own library:

  • How to Build a Tiered Intervention Menu
  • 5 PBIS Interventions for Tier 1 to Use in Your District Today
  • 42 MTSS Intervention Strategies to Bring Back to Your Support Team
  • 6 Effective Interventions for Social-Emotional Learning
  • 18 Research-Based Interventions for Your MTSS 
  • 20 Evidence-Based Interventions for High School Students 

5. Assign an adult champion

Who will carry out the intervention plan with fidelity? A teacher? Interventionist? School counselor? 

Clear ownership is key. Whether it's one adult or a team, make sure to document who will be responsible for delivering the intervention(s), logging notes, and monitoring student progress.

6. Set a timeline

Next, set a clear prescription for how often and how long an intervention will take place. Record a start date (when the intervention is set to begin) and a duration (the expected length of the intervention cycle). We recommend five to six weeks at a minimum so the intervention has a chance to take hold.

7. Establish a method for progress monitoring

You're almost done! The last step in building a great intervention plan is deciding on a data collection strategy. 

Once the intervention plan is underway, it's important to collect and record qualitative and/or quantitative data at regular intervals. Many goals are best tracked quantitatively, such as reading level growth or computational fluency. Other goals (behavioral and SEL goals, for example) might be best tracked qualitatively—like making note of how a student is interacting with peers in class. (Learn more about the fundamentals of progress monitoring for MTSS/RTI.)

Don't forget to include the following information on your intervention plan:

  • Monitoring Frequency: How often you'll update the student’s progress over the course of the intervention cycle. For example, this could be weekly, bi-weekly, or monthly.
  • Monitoring Method: The assessment you'll use to track the student’s progress. Indicate a baseline (the student’s most recent assessment score) and target (desired assessment score). Alternatively, you might plan to track progress through observational notes. (The short sketch after this list shows how a baseline, the latest score, and a target feed the decision point.)
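Here is that sketch. The baseline, target, and weekly scores are hypothetical, and the decision rules simply mirror the three outcomes described earlier in this post.

```python
# Hypothetical weekly progress-monitoring data for one intervention cycle.
baseline, target = 0.40, 0.80            # share of do-now activities completed
weekly_scores = [0.45, 0.50, 0.55, 0.62, 0.66, 0.71]

latest = weekly_scores[-1]
if latest >= target:
    decision = "Goal met -- close out the intervention plan."
elif latest > baseline:
    decision = "Progressing -- continue the intervention."
else:
    decision = "Not responding -- rethink the strategies in place."

print(f"Baseline {baseline:.0%} | latest {latest:.0%} | target {target:.0%}")
print(decision)
```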


Example of a reading intervention plan in Panorama Student Success (mock data pictured)

If you use Panorama for MTSS : When creating an intervention plan, you'll see recommended interventions based on the goals of the plan. Then, log qualitative and quantitative notes to monitor a student's progress over time. The notes are saved to the student profile so other educators in your school can stay up-to-date on the student's progress.

To identify students in need of targeted support, educators can analyze academic, behavioral, social-emotional, and attendance data. Look for patterns or discrepancies indicating areas where students may require additional assistance. Collaborating with colleagues and involving students in the process can also provide valuable insights.

Parents or guardians are essential partners in the intervention planning process. Educators should communicate regularly with families to share progress updates, solicit feedback, and discuss strategies for supporting the student at home. Collaborative efforts between school and home environments enhance the effectiveness of interventions and promote student success.

Choosing the right intervention strategy involves considering the student's specific needs, strengths, and challenges. Educators should assess the student's response to previous interventions, gather input from colleagues and support staff, and consult research-based resources. Tailoring interventions to align with the student's goals and preferences increases the likelihood of success.

Consistent implementation of intervention plans requires clear communication, ongoing monitoring, and shared responsibility among all stakeholders. Educators should establish protocols for documenting progress, provide training and support to staff involved in intervention delivery, and maintain open lines of communication with students, families, and support team members. Regular review meetings and data-driven decision-making processes can help ensure fidelity to the intervention plan.

Put This Into Practice!

Now that you have the building blocks for writing an effective intervention plan, there's only one thing left to do: put it into action. If you're an MTSS leader or coordinator for your district, we hope that you'll share this process (and template!) with your building-level student support teams. If you are an educator working with a specific student, we hope that this process helps you stay organized as you deliver supports.

Access intervention planning resources in our free Interventions and Progress Monitoring Toolkit



Guidance on how to develop complex interventions to improve health and healthcare

BMJ Open, Volume 9, Issue 8

Alicia O'Cathain,1 Liz Croot,1 Edward Duncan,2 Nikki Rousseau,2 Katie Sworn,1 Katrina M Turner,3 Lucy Yardley,3,4 Pat Hoddinott2

1 Medical Care Research Unit, School of Health and Related Research, University of Sheffield, Sheffield, UK
2 Nursing, Midwifery and Allied Health Professional Research Unit, University of Stirling, Stirling, UK
3 School of Social and Community Medicine, University of Bristol, Bristol, UK
4 Psychology, University of Southampton, Southampton, UK

Correspondence to Professor Alicia O'Cathain; a.ocathain{at}sheffield.ac.uk

Objective To provide researchers with guidance on actions to take during intervention development.

Summary of key points Based on a consensus exercise informed by reviews and qualitative interviews, we present key principles and actions for consideration when developing interventions to improve health. These include seeing intervention development as a dynamic iterative process, involving stakeholders, reviewing published research evidence, drawing on existing theories, articulating programme theory, undertaking primary data collection, understanding context, paying attention to future implementation in the real world and designing and refining an intervention using iterative cycles of development with stakeholder input throughout.

Conclusion Researchers should consider each action by addressing its relevance to a specific intervention in a specific context, both at the start and throughout the development process.

Keywords: intervention development

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/bmjopen-2019-029954


Introduction

There is increasing demand for new interventions as policymakers and clinicians grapple with complex challenges, such as integration of health and social care, risk associated with lifestyle behaviours, multimorbidity and the use of e-health technology. Complex interventions are often required to address these challenges. Complex interventions can have a number of interacting components, require new behaviours by those delivering or receiving the intervention or have a variety of outcomes. 1 An example is a multicomponent intervention to help people stand more at work, including a height adjustable workstation, posters and coaching sessions. 2 Careful development of complex interventions is necessary so that new interventions have a better chance of being effective when evaluated and being adopted widely in the real world. Researchers, the public, patients, industry, charities, care providers including clinicians and policymakers can all be involved in the development of new interventions to improve health, and all have an interest in how best to do this.

The UK Medical Research Council (MRC) published influential guidance on developing and evaluating complex interventions, presenting a framework of four phases: development, feasibility/piloting, evaluation and implementation. 1 The development phase is what happens between the idea for an intervention and formal pilot testing in the next phase. 3 This phase was only briefly outlined in the original MRC guidance and requires extension to offer more help to researchers wanting to develop complex interventions. Bleijenberg and colleagues 4 brought together learning from a range of guides/published approaches to intervention development to enrich the MRC framework. 4 There are also multiple sources of guidance to intervention development, embodied in books and journal articles about different approaches to intervention development (for example 5 ) and overviews of the different approaches. 6 These approaches may offer conflicting advice, and it is timely to gain consensus on key aspects of intervention development to help researchers to focus on this endeavour. Here, we present guidance on intervention development based on a consensus study which we describe below. We present this guidance as an accessible communication article on how to do intervention development, which is aimed at readers who are developers, including those new to the endeavour. We do not present it as a ‘research article’ with methods and findings to maximise its use as guidance. Lengthy detail and a long list of references are not provided so that the guidance is focused and user friendly. In addition, the key actions of intervention development are summarised in a single table so that funding panel members and developers can use this as a quick reference point of issues to consider when developing health interventions.

How this guidance was developed

This guidance is based on a study funded by the MRC and the National Institute for Health Research in the UK, with triangulation of evidence from three sources. First, we undertook a review of published approaches to intervention development that offer developers guidance on specific ways to develop interventions 6 and a review of primary research reporting intervention development. The next two sources involved developers and wider stakeholders. Developers were people who had written articles or books detailing different approaches to developing interventions and people who had developed interventions. Wider stakeholders were people involved in the wider intervention development endeavour: directors of research funding panels, editors of journals that had published intervention development studies, public and patient involvement members of studies involving intervention development, and people working in health service implementation. We carried out qualitative interviews 7 and then conducted a consensus exercise consisting of two simultaneous and identical e-Delphi studies, distributed to intervention developers and wider stakeholders respectively, followed by a consensus workshop. We generated items for the e-Delphi studies based on our earlier reviews and analysis of interview data, and asked participants to rate 85 items on a five-point scale from ‘very’ to ‘not important’ in answer to the question ‘when developing complex interventions to improve health, how important is it to…’. The distribution of answers to each item is displayed in Appendix 1, and e-Delphi participants are described in Appendix 2. In addition to these research methods, we convened an international expert panel with members from the UK, USA and Europe early in the project to guide the research. Members of this expert panel participated in the e-Delphi studies and consensus workshop alongside other participants.
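For readers unfamiliar with how e-Delphi ratings are turned into consensus decisions, the sketch below shows one common convention: an item is treated as reaching consensus when a threshold proportion of panellists rate it 4 or 5 on the five-point importance scale. The items, ratings and 70% threshold are hypothetical illustrations, not the actual data or decision rule used in the study described above.

```python
# Illustrative only: one common way to summarise e-Delphi importance ratings.
# Items, scores and the 70% threshold are invented for this example and are
# not taken from the consensus study described in the text.

ratings = {
    "Involve stakeholders throughout development": [5, 5, 4, 4, 5, 3, 4],
    "Follow a single published approach exactly":  [2, 3, 1, 2, 3, 2, 4],
}

THRESHOLD = 0.70  # proportion rating the item 4 ('important') or 5 ('very important')

for item, scores in ratings.items():
    agree = sum(1 for s in scores if s >= 4) / len(scores)
    verdict = "consensus reached" if agree >= THRESHOLD else "no consensus"
    print(f"{item}: {agree:.0%} rated >= 4 -> {verdict}")
```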

Framework for intervention development

We base this guidance on expert opinion because there is a research evidence gap about which actions are needed in intervention development to produce successful health interventions. Systematic reviews have been undertaken to determine whether following a specific published approach, or undertaking a specific action, results in effective interventions. Unfortunately, this evidence base is sparse in the field of health, largely due to the difficulty of empirically addressing this question. 8 9 Evidence tends to focus on the use of existing theory within intervention development—for example, the theory of Diffusion of Innovation or theories on behaviour change—and a review of reviews shows that interventions developed with existing theory do not result in more effective interventions than those not using existing theory. 10 The authors of this latter review highlight problems with the evidence base rather than dismiss the possibility that existing theory could help produce successful interventions.

Key principles and actions of intervention development are summarised below. More detailed guidance for the principles and actions is available at https://www.sheffield.ac.uk/scharr/sections/hsr/mcru/indexstudy.

Key principles of intervention development

Key principles of intervention development are that it is dynamic, iterative, creative, open to change and forward looking to future evaluation and implementation. Developers are likely to move backwards and forwards dynamically between overlapping actions within intervention development, such as reviewing evidence, drawing on existing theory and working with stakeholders. There will also be iterative cycles of developing a version of the intervention: getting feedback from stakeholders to identify problems, implementing potential solutions, assessing their acceptability and starting the cycle again until assessment of later iterations of the intervention produces few changes. These cycles will involve using quantitative and qualitative research methods to measure processes and intermediate outcomes, and assess the acceptability, feasibility, desirability and potential unintended harms of the intervention.

Developers may start the intervention development with strong beliefs about the need for the intervention, its content or format or how it should be delivered. They may also believe that it is possible to develop an intervention with a good chance of being effective or that it can only do good not harm. Being open to alternative possibilities throughout the development process may lead to abandoning the endeavour or taking steps back as well as forward. The rationale for being open to change is that this may reduce the possibility of developing an intervention that fails during future evaluation or is never implemented in practice. Developers may also benefit from looking forward to how the intervention will be evaluated so they can make plans for this and identify learning and key uncertainties to be addressed in future evaluation.

Key actions of intervention development

Key actions for developers to consider are summarised in table 1 and explored in more detail throughout the rest of the paper. It may not be possible or desirable for developers to address all these actions during their development process, and indeed some may not be relevant to every problem or context. The recommendation made here is that developers ‘consider the relevance and importance of these actions to their situation both at the start of, and throughout, the development process’.


Table 1. Framework of actions for intervention development

These key actions are set out in table 1 in what appears to be a sequence. In practice, however, they are addressed in a dynamic way: undertaken in parallel, revisited regularly as the intervention evolves, and interacting with each other as learning from one action influences plans for other actions. These actions are explored in more detail below and presented in a logic model for intervention development (figure 1). A logic model is a diagram of how an intervention is proposed to work, showing the mechanisms by which an intervention influences the proposed outcomes. 11 The short and long-term effects of successful intervention development were informed by the qualitative interviews with developers and wider stakeholders. 7


Figure 1. Logic model for intervention development.

Plan the development process

Understand the problem

Developers usually start with a problem they want to solve. They may also have some initial ideas about the content, format or delivery of the proposed intervention. Knowledge about the problem and the possibilities for an intervention may be based on personal experiences of the problem (patients, carers or members of the public); on work (practitioners, policymakers, researchers); on published research or theory; or on discussions with stakeholders. These early ideas about the intervention may be refined, and indeed challenged, throughout the intervention development process. For example, understanding the problem, the priorities for addressing it and the aspects that are amenable to change is part of the development process, and different solutions may emerge as understanding increases. In addition, developers may find that it is not necessary to develop a new intervention because effective or cost-effective ones already exist. Alternatively, it may not be worth developing a new intervention because the potential cost is likely to outweigh the potential benefits, because its limited reach could increase health inequalities, or because the current context is not conducive to using it. Health economists may contribute to this debate.

Identify resources—time and funding

Once a decision has been made that a new intervention is necessary, and has the potential to be worthwhile, developers can consider the resources available to them. Spending too little time developing an intervention may result in a flawed intervention that is later found not to be effective or cost-effective or is not implemented in practice, resulting in research waste. Alternatively, spending too much time on development could also waste resources by leaving developers with an outdated intervention that is no longer acceptable or feasible to deliver because the context has changed so much or is no longer a priority. It is likely that a highly complex problem with a history of failed interventions will warrant more time for careful development.

Some funding bodies fund standalone intervention development studies or fund this endeavour as part of a programme of development, piloting and evaluation of an intervention. While pursuing such funding may be desirable to ensure sufficient resource, in practice some developers may not be able to access this funding and may have to fund different parts of the development process from separate pots of money over a number of years.

Applying for funding requires writing a protocol for a study. Funders need detail about the proposed intervention and the development process to make a funding decision. It may feel difficult to specify the intervention and the detail of its development before starting because these will depend on learning occurring throughout the development process. Developers can address this by describing in detail their best guess of the intervention and their planned development process, recognising that both are likely to change in practice. Even if funding is not sought, it may be a good idea to produce a protocol detailing the processes to be undertaken to develop the intervention so that sufficient resources can be identified.

Decide which approach to intervention development to take

A key decision for teams is whether to be guided by one of the many published approaches to intervention development or undertake a more pragmatic self-selected set of actions. A published approach is a guide to the process and methods of intervention development set out in a book, website or journal article. The rationale for using a published approach is that it sets out systematic processes that other developers have found useful. Some published approaches and approaches that developers have used in practice are listed in table 2 . 6 No research has shown that one of these approaches is better than another or that their use always leads to the development of successful interventions. In practice, developers may select a specific published approach because of the purpose of their intervention development, for example, aiming to change behaviour might lead to the use of the Behaviour Change Wheel or Intervention Mapping, in conjunction with the Person Based Approach. Alternatively, selection may depend on developers’ beliefs or values, for example, partnership approaches such as coproduction may be selected because developers believe that users will find the resultant interventions more acceptable and feasible, or they may value inclusive work practices in their own right. Although developers may follow a published approach closely, experts recommend that developers apply these approaches flexibly to fit their specific context. Many of these approaches share the same actions 4 6 and simply place more emphasis on one or a subset of actions. Researchers sometimes combine the use of different approaches in practice to gain the strengths of two approaches, as in the ‘Combination’ category of table 2 .

Table 2. Different approaches to intervention development

Involve stakeholders throughout the development process

Many groups of people are likely to have a stake in the proposed intervention: the intervention may be aimed at patients or the public, or they may be expected to use it; practitioners may deliver the intervention in a range of settings, for example, hospitals, primary care, community care, social care, schools, communities and voluntary/third sector organisations; and users, policymakers or taxpayers may pay for the intervention. The rationale for involving relevant stakeholders from the start, and indeed working closely with them throughout, is that they can help to identify priorities, understand the problem and find solutions that may make a difference to future implementation in the real world.

There are many ways of working with stakeholders, and different ways may be relevant for different stakeholders at different times during the development process. Consultation may sometimes be appropriate, where a one-off meeting with a set of stakeholders helps developers to understand the context of the problem or the context in which the intervention would operate. Alternatively, the intervention may be designed closely with stakeholders using a coproduction process, where stakeholders and developers generate ideas about potential interventions and make decisions together throughout the development process about its content, format, style and delivery. 12 This could involve a series of workshops and meetings to build relationships over time, facilitating understanding of the problem and generation of ideas for the new intervention. Coproduction rather than consultation is likely to be important when buy-in is needed from a set of stakeholders to facilitate the feasibility of, acceptability of and engagement with the intervention, or when the health problem or context is particularly complex. Coproduction involves stakeholders in this decision-making, whereas with consultation, decisions are made by the research team. Stakeholders’ views may also be obtained through qualitative interviews, surveys and stakeholder workshops, with methods tailored to the needs of each stakeholder. Innovative activities can be used to help engage stakeholders: for example, creative sessions facilitated by a design specialist might involve imagining what versions of the new intervention would look like if designed by various well-known global manufacturers, or creating a patient persona to help people think through the experience of receiving an intervention. As well as participating in developing the intervention, stakeholders can help to shape the intervention development process itself. Members of the public, patients and service users are key stakeholders, and experts recommend planning to integrate their involvement into the intervention development process from the start.

Bring together a team and establish decision-making processes

Developers may choose to work within any size of team. Small teams can reach out to stakeholders at different points in the development process; alternatively, large teams may include all the necessary expertise. Experts recommend including: experts in the problem to be addressed by the intervention; individuals with a strong track record in developing complex interventions; a behaviour change scientist when the intervention aims to change behaviour; and people who are skilled at maximising engagement of stakeholders. Other possible team members include experts in evaluation methods and economics. Within a coproduction approach to development, key stakeholders participate as equal partners with researchers. Large teams can generate ideas and ensure all the relevant skills are available, but may also increase the risk of conflicting views and difficulties when making decisions about the final intervention. There is no consensus on the size of team to have, but experts think it is important to agree a process for making decisions. In particular, experts recommend that team members understand their roles, rights and responsibilities; document the reasons for decisions made; and are prepared to test different options where there are team disagreements.

Review published research evidence

Reviewing published research evidence before starting to develop an intervention can help to define the health problem and its determinants, understand the context in which the problem exists, clarify who the intervention should be aimed at, identify whether effective or cost-effective interventions already exist for the target population/setting/problem, identify facilitators and barriers to delivering interventions in this context and identify key uncertainties that need to be addressed using primary data collection. Continuing to review evidence throughout the process can help to address uncertainties that arise, for example, if a new substantive intervention component is proposed then the research evidence about it can be explored. Evidence can change quickly, and keeping up with it by reviewing literature can alert developers to new relevant interventions that have been found to be effective or cost-effective. Developers may be tempted to look for evidence that supports existing ideas and plans, but should also look for, and take into account, evidence that the proposed intervention may not work in the way intended. Undertaking systematic reviews is not always necessary because there may be recent relevant reviews available, nor is it always possible in the context of tight resources available to the development team. However, undertaking some review is important for ensuring that there are no existing interventions that would make the one under development redundant.

Draw on existing theories

Some developers call their approaches to intervention development ‘theory based’ when they draw on psychological, sociological, organisational or implementation theories, or frameworks of theories, to inform their intervention. 6 The rationale for drawing on existing theories is that they can help to identify what is important, relevant and feasible to inform the intended goals of the intervention 13 and inform the content and delivery of any intervention. It may be relevant to draw on more than one existing theory. Experts recommend considering which theories are relevant at the start of the development process. However, the use of theories may need to be kept under scrutiny since in practice some developers have found that their selected theory proved difficult to apply during the development process.

Articulate programme theory

A programme theory describes how a specific intervention is expected to lead to its effects and under what conditions. 14 It shows the causal pathways between the content of the intervention, intermediate outcomes and long-term goals and how these interact with contextual factors. Articulating programme theory at the start of the development process can help to communicate to funding agencies and stakeholders how the intervention will work. Existing theories may inform this programme theory. Logic models can be drawn to communicate different parts of the programme theory such as the causes of a problem, or the mechanisms by which an intervention will achieve outcomes, to both team members and external stakeholders. Figure 1 is an example of a logic model. The programme theory and logic models are not static. They should be tested and refined throughout the development process using primary and secondary data collection and stakeholder input. Indeed, they are advocated for use in process evaluations alongside outcome evaluations in the recent MRC Guidance on process evaluation. 15
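Because the causal chains a logic model encodes are easier to grasp from a concrete case, here is a minimal sketch of one as a small Python structure, using the workplace-standing intervention mentioned in the introduction as a starting point. The components, mechanisms and outcomes are invented for illustration; this is not the model shown in figure 1, and a real logic model would come from the team's own programme theory.

```python
# A logic model expressed as directed links: cause --[mechanism]--> effect.
# All node and mechanism names are hypothetical illustrations only.

logic_model = [
    ("height-adjustable workstation", "removes barrier to standing",  "more standing time"),
    ("coaching sessions",             "builds motivation and habit",  "more standing time"),
    ("more standing time",            "reduced sedentary behaviour",  "improved cardiometabolic markers"),
]

for cause, mechanism, effect in logic_model:
    print(f"{cause} --[{mechanism}]--> {effect}")
```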

Undertake primary data collection

Primary data collection, usually involving mixed methods, can be used for a range of purposes throughout the intervention development process. Reviewing the evidence base may identify key uncertainties that primary data collection can then address. Non-participant observation can be used to understand the setting in which the intervention will be used. Qualitative interviews with the target population or patient group can identify what matters most to people, their lived experience or why people behave as they do. ‘Verbal protocol’, which involves users of an intervention talking aloud about it as they use it, 16 can be undertaken to understand the usability of early versions of the intervention. Pretest and post-test measures may be taken of intermediate outcomes to begin early testing of some aspects of the programme theory, an activity that will continue into the feasibility and evaluation phases of the MRC framework and may lead to changes to the programme theory. Surveys, discrete choice experiments or qualitative interviews can be used to assess the acceptability, values and priorities of those delivering and receiving the intervention.

Understand the context

Recent guidance on context in population health intervention research identifies a breadth of features, including those relating to the population and individuals; the physical location or geographical setting; social, economic, cultural and political influences; and factors affecting implementation, for example, organisation, funding and policy. 17 An important context is the specific setting in which the intervention will be used, for example, within a busy emergency department or within people’s homes. The rationale for understanding this context, and developing interventions which can operate within it, is to avoid developing interventions that fail during later evaluation because too few people deliver or use them. Context also includes the wider complex health and social care, societal or political systems within which any intervention will operate. 18 Different approaches can be taken to understand context, including reviews of evidence, stakeholder engagement and primary data collection. A challenge of understanding context is that it may change rapidly over the course of the development process.

Pay attention to future implementation of the intervention in the real world

The end goal of developers, and of those who fund development, is real-world implementation rather than simply the development of an intervention that is shown to be effective or cost-effective in a future evaluation. 7 Many interventions do not lead to change in policy or practice, and it is important that effective interventions inform policy and are eventually used in the real world to improve health and care. To achieve this goal, developers may pay attention early in the development process to factors that might affect use of the intervention, ‘scale up’ of the intervention for use nationally or internationally, and sustainability. For example, considering the cost of the intervention at an early stage, including as stakeholders the official bodies or policymakers that would endorse or accredit the intervention, or addressing the challenges of training practitioners to deliver the intervention may all help its future implementation. Implementation-based approaches to intervention development are listed in table 2. Some other approaches listed in this table, such as Normalisation Process Theory, also emphasise implementation in the real world.

Design and refine the intervention

The term ‘design’ is sometimes used interchangeably with the term ‘development’. However, it is useful to see design as a specific creative part of the development process where ideas are generated, and decisions are made about the intervention components and how it will be delivered, by whom and where. Design starts with generation of ideas about the content, format, style and delivery of the proposed intervention. The process of design may use creative ways of generating ideas, for example, using games or physically making rough prototypes. Some teams include experts in design or use designers external to the team when undertaking this action. The rationale for a wide-ranging and creative design process is to identify innovative and workable ideas that may not otherwise have been considered.

After generating ideas, a mock-up or prototype of the intervention, or of a key component, may be created to allow stakeholders to offer views on it. Once an early version or prototype of the intervention is available, it can be refined (sometimes called optimised) using a series of rapid iterations, where each iteration includes an assessment of how acceptable, feasible and engaging the intervention is, leading to cycles of refinements. The programme theory and logic models are important at this point, and developers may test whether some of their proposed mechanisms of action are affecting intermediate outcomes if statistical power allows. The rationale for spending time on multiple iterations is that problems can be identified and solutions found prior to any expensive future feasibility or evaluation phase. Some experts take a quantitative approach to optimisation of an intervention, specifically the Multiphase Optimization Strategy in table 2, but not all experts agree that this is necessary.

End the development phase

Seeing this endeavour as a discrete ‘intervention development phase’ that comes to an end may feel artificial. In practice, there is overlap between some actions taken in the development phase and the feasibility phase of the MRC framework, 1 such as consideration of acceptability and some measurement of change in intermediate outcomes. Developers may return to the intervention development phase if findings from the feasibility phase identify significant problems with the intervention. In many ways, development never stops, because developers will continue to learn about the intervention, and refine it, during the later pilot/feasibility, evaluation and implementation phases. The intention may be that some types of intervention continuously evolve during evaluation and implementation, which may reduce the amount of time spent on the development phase. However, developers need to decide when to stop the first intensive development phase, either by abandoning the intervention because pursuing it is likely to be futile, or by moving on to the next phase of feasibility/pilot testing or full evaluation. They also face the challenge of convincing potential funders of an evaluation that enough development has occurred to risk spending resources on a pilot or evaluation. The decision to end the development phase may be informed partly by practicalities, such as the amount of time and money available, and partly by the concept of data saturation (used in qualitative research): the intensive process stops when few refinements are suggested by those delivering or using the intervention during its period of refinement, or when these and other stakeholders indicate that the intervention feels appropriate to them.

At the end of the development process, policymakers, developers or service providers external to the original team may want to implement or evaluate the intervention. Describing the intervention, using one of the relevant reporting guidelines such as the Template for Intervention Description and Replication Checklist 19 and producing a manual or document that describes the training as well as content of the intervention can facilitate this. This information can be made available on a website, and, for some digital interventions, the intervention itself can be made available. It is helpful to publish the intervention development process because it allows others to make links in the future between intervention development processes and the subsequent success of interventions and learn about intervention development endeavours. Publishing failed attempts to develop an intervention, as well as those that produce an intervention, may help to reduce research waste. Reporting multiple, iterative and interacting processes in these articles is challenging, particularly in the context of limited word count for some journals. It may be necessary to publish more than one paper to describe the development if multiple lessons have been learnt for future development studies.

Conclusions

This guidance on intervention development presents a set of principles and actions for future developers to consider throughout the development process. There is insufficient research evidence to recommend that a particular published approach or set of actions is essential to produce a successful intervention. Some aspects of the guidance may not be relevant to some interventions or contexts, and not all developers are fortunate enough to have a large amount of resource available to them, so a flexible approach to using the guidance is required. The best way to use the guidance is to consider each action by addressing its relevance to a specific intervention in a specific context, both at the start and throughout the development process.


Acknowledgments

This guidance is based on secondary and primary research. Many thanks to participants in the e-Delphis, consensus conference and qualitative interviews, to members of our Expert Panel and to people who attended workshops discussing this guidance. The researchers leading the update of the MRC guidance on developing and evaluating interventions, due to be published later this year, also offered insightful comments on our guidance to facilitate fit between the two sets of guidance.


Contributors AOC and PH led the development of the guidance, wrote the first draft of the article and the full guidance document which it describes, and integrated contributions from the author group into subsequent drafts. All authors contributed to the design and content of the guidance and subsequent drafts of the paper (AOC, PH, LY, LC, NR, KMT, ED, KS). The guidance is based on reviews and primary research. AOC led the review of different approaches to intervention development working with KS. LC led the review of primary research working with KS. PH led the qualitative interview study working with NR, KMT and ED. ED led the consensus exercise working with NR. AOC acts as guarantor.

Funding MRC-NIHR Methodology Research Panel (MR/N015339/1). Funders had no influence on the guidance presented here. The authors were fully independent of the funders.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.


How to Write a Proposed Intervention Research Paper


Writing a proposed intervention research paper is a critical step in the research process. It outlines your plan for conducting a study that aims to address a specific problem or issue through a carefully designed intervention.

This guide will walk you through the essential components of a proposed intervention research paper, providing detailed explanations and examples to help you craft a comprehensive and compelling proposal.


Introduction

The introduction section sets the stage for your research paper and captures the reader’s attention. It should provide a clear overview of the topic you’ll be exploring and the significance of your proposed intervention.

Background Information

Start by providing relevant background information on the problem or issue you’ll be addressing. Explain the scope and magnitude of the problem, and how it affects the target population. Use statistics, real-life examples, or case studies to illustrate the significance of the problem and the need for intervention.

Example: “Childhood obesity is a growing public health concern, with approximately 19% of children aged 2-19 years in the United States being obese (Centers for Disease Control and Prevention, 2022). Obesity in children can lead to various physical and mental health issues, including type 2 diabetes, cardiovascular problems, and low self-esteem.”

Purpose Statement

Clearly state the purpose of your proposed intervention research. This should be a concise statement that outlines the main goal of your study and the specific problem or issue you aim to address.

Example: “The purpose of this proposed intervention is to design and evaluate a school-based nutrition education program aimed at promoting healthy eating habits and reducing the prevalence of childhood obesity among elementary school students.”

Research Questions or Hypotheses

Depending on the nature of your research, you’ll either present research questions or hypotheses. Research questions are open-ended inquiries that guide your study, while hypotheses are testable statements that predict the outcome of your research.

Example (Research Questions):

  • Does the proposed nutrition education program effectively improve students’ knowledge about healthy eating?
  • To what extent does the intervention influence students’ dietary choices and behaviors?
  • How do different factors (e.g., gender, socioeconomic status) affect the effectiveness of the intervention?

Example (Hypotheses):

H1: Students who participate in the nutrition education program will demonstrate a significant increase in their knowledge of healthy eating compared to those in the control group.

H2: Students who participate in the nutrition education program will report higher consumption of fruits and vegetables and lower consumption of unhealthy snacks compared to those in the control group.

Literature Review

The literature review section demonstrates your understanding of the existing body of knowledge related to your topic. It should be comprehensive and critically analyze relevant studies, theories, and models.

Theoretical Framework

Discuss the theoretical framework that underpins your proposed intervention. Explain the key concepts, theories, and models that inform your approach, and how they contribute to the development of your intervention strategy.

Example: “The proposed intervention is grounded in the Social Cognitive Theory (Bandura, 1986), which emphasizes the reciprocal interaction between personal factors, behavioral patterns, and environmental influences. This theory suggests that providing students with knowledge and skills, as well as creating a supportive environment, can positively influence their dietary behaviors.”

Previous Research

Provide an overview of previous research related to your topic. Highlight the strengths and limitations of existing studies, and explain how your proposed intervention addresses gaps or builds upon existing knowledge. Critically analyze the methodologies, findings, and implications of relevant studies, and discuss how they inform your proposed intervention.

Example: “Several school-based nutrition education programs have been implemented and evaluated, with mixed results. Smith et al. (2018) found that their program effectively increased students’ knowledge about healthy eating but did not significantly impact their dietary behaviors. On the other hand, Johnson and colleagues (2020) reported positive changes in both knowledge and behaviors among students who participated in their intervention. However, these studies focused on middle school students, and there is a need for interventions targeting younger age groups.”

Methodology

The methodology section outlines the specific steps you’ll take to conduct your research. It should be detailed and clear, allowing others to replicate your study if necessary.

Research Design

Describe the research design you’ll be using, such as experimental, quasi-experimental, or qualitative. Explain why this design is appropriate for your study and how it aligns with your research questions or hypotheses.

Example: “This study will employ a quasi-experimental design with a non-equivalent control group. Two elementary schools will be selected, with one school receiving the nutrition education program (intervention group) and the other serving as the control group. This design allows for the comparison of outcomes between the two groups while accounting for potential confounding variables.”

Participants and Sampling

Describe the target population for your intervention and how you’ll select participants. Explain your sampling methods (e.g., random sampling, convenience sampling) and criteria for inclusion or exclusion. Provide details on the sample size calculation and justify the chosen sample size based on statistical power considerations or previous similar studies.

Example: “The target population for this study is elementary school students in grades 3-5 (ages 8-11) in the city of [X]. A convenience sampling method will be used to select two elementary schools based on their willingness to participate and their demographic similarities. All students in the selected grades at the intervention school will be invited to participate, while students in the same grades at the control school will serve as the comparison group. Based on a power analysis, a minimum sample size of 200 students (100 per group) is required to detect a medium effect size with a power of 0.8 and an alpha level of 0.05.”
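The power analysis mentioned in this example can be reproduced with standard software. The sketch below uses Python's statsmodels package. Note that for a conventional "medium" effect size of Cohen's d = 0.5, roughly 64 students per group suffice at power 0.8 and alpha 0.05, so the 100-per-group target in the example corresponds to a somewhat smaller detectable effect (d ≈ 0.4) and builds in a cushion for attrition. These numbers are illustrative, not a prescription for any particular study.

```python
# Two-sample (independent groups) power analysis for the example above.
# Requires statsmodels: pip install statsmodels
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.5, 0.4):  # 'medium' and slightly smaller standardized effect sizes
    n_per_group = analysis.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"d = {d}: about {n_per_group:.0f} students per group")

# d = 0.5 -> ~64 per group; d = 0.4 -> ~99 per group, close to the
# 100-per-group target quoted in the example (which also allows for attrition).
```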

Intervention Description

Provide a detailed description of the proposed intervention. Explain the specific components, activities, and procedures involved. Use examples or visual aids (e.g., diagrams, flowcharts) to clarify the intervention, if necessary. Discuss the theoretical underpinnings and evidence-based strategies that inform the design of your intervention.

Example: “The proposed nutrition education program will consist of eight weekly sessions, each lasting 60 minutes. The sessions will be conducted during regular school hours and will be led by trained nutritionists and health educators. The program will incorporate interactive activities, multimedia presentations, hands-on cooking demonstrations, and taste tests to engage students and reinforce learning.

The sessions will cover topics such as the importance of a balanced diet, reading nutrition labels, portion control, identifying healthy snack options, and strategies for making healthy food choices. Each session will include a combination of educational content, group discussions, and practical activities to reinforce the concepts learned.

The program will also involve parental engagement through newsletters, online resources, and a family cooking workshop to encourage healthy eating habits at home. Additionally, the school cafeteria will collaborate with the program by offering healthier menu options and promoting the consumption of fruits and vegetables during lunchtime.”

Data Collection

Outline the methods you’ll use to collect data, such as surveys, interviews, observations, or standardized assessments. Describe the instruments or tools you’ll use and their reliability and validity. Explain how the data collection methods align with your research questions or hypotheses.

Example: “Data will be collected using a combination of quantitative and qualitative methods. Students’ knowledge about healthy eating will be assessed using a validated nutrition knowledge questionnaire (Jones et al., 2015) administered before and after the intervention. Dietary intake and behaviors will be measured through a self-reported food frequency questionnaire (FFQ) and a 24-hour dietary recall interview conducted at baseline and post-intervention.

Additionally, focus group discussions with a subset of students from the intervention group will be conducted to gather qualitative data on their experiences, perceptions, and feedback regarding the nutrition education program. Observational data on students’ lunchtime eating behaviors and food choices will also be collected by trained researchers.”
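Food frequency questionnaires are typically scored by mapping each frequency category to an estimated number of servings per day and summing within a food group. The sketch below is a minimal, hypothetical illustration of that scoring step; the categories, items and weights are invented and are not taken from the validated instruments cited in the example.

```python
# Hypothetical FFQ scoring: map frequency categories to servings/day,
# then total servings within a food group. Not a validated instrument.

FREQ_TO_DAILY = {
    "never": 0.0,
    "1-3 per month": 0.07,   # ~2 servings / 30 days
    "1-2 per week": 0.21,    # ~1.5 servings / 7 days
    "3-6 per week": 0.64,    # ~4.5 servings / 7 days
    "daily": 1.0,
}

response = {  # one student's (invented) answers for fruit/vegetable items
    "apples or other fruit": "3-6 per week",
    "green vegetables": "1-2 per week",
    "fruit juice": "daily",
}

daily_servings = sum(FREQ_TO_DAILY[freq] for freq in response.values())
print(f"Estimated fruit/vegetable servings per day: {daily_servings:.2f}")
```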

Data Analysis

Explain how you’ll analyze the data collected from your study. Specify the statistical tests or analytical methods you’ll use and how they align with your research questions or hypotheses. Discuss any potential limitations or challenges in data analysis and how you’ll address them.

Example: “Quantitative data from the surveys and questionnaires will be analyzed using appropriate statistical software (e.g., SPSS, R). Descriptive statistics (means, standard deviations, frequencies) will be calculated for demographic variables and outcome measures. Inferential statistics, such as t-tests and analysis of variance (ANOVA), will be used to compare the intervention and control groups on knowledge, dietary intake, and behavior changes.

For the qualitative data from focus group discussions, a thematic analysis approach will be employed. Transcripts will be coded and analyzed to identify recurring themes and patterns related to students’ experiences, perceptions, and suggestions for program improvement.

Potential limitations include self-report bias in dietary intake measures and the possibility of non-response or attrition during the study. To address these limitations, multiple data collection methods will be utilized, and appropriate statistical techniques (e.g., intent-to-treat analysis) will be employed to handle missing data.”
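As a concrete illustration of the group comparison described above, the sketch below runs an independent-samples t-test on post-intervention knowledge scores using scipy and reports an effect size. The data are simulated placeholders; a real analysis would also adjust for baseline scores and for clustering of students within schools, which this toy example ignores.

```python
# Toy comparison of intervention vs control knowledge scores.
# Scores are simulated; a real analysis would adjust for baseline
# scores and for clustering of students within schools.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
intervention = rng.normal(loc=75, scale=10, size=100)  # simulated post-test scores
control = rng.normal(loc=70, scale=10, size=100)

t_stat, p_value = stats.ttest_ind(intervention, control)

# Cohen's d using the pooled standard deviation (equal group sizes)
pooled_sd = np.sqrt((intervention.var(ddof=1) + control.var(ddof=1)) / 2)
d = (intervention.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {d:.2f}")
```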

Ethical Considerations

Address any ethical issues or concerns related to your proposed intervention research. Discuss how you’ll protect participants’ rights, maintain confidentiality, and address potential risks or benefits.

Example: “Ethical considerations are of utmost importance in this study, as it involves minors (children) as participants. The research protocol will be submitted to the Institutional Review Board (IRB) for approval to ensure adherence to ethical principles and guidelines.

Informed consent will be obtained from parents/guardians before enrolling their children in the study. The consent form will clearly explain the purpose, procedures, potential risks and benefits, and the voluntary nature of participation. Children will also be asked to provide their assent to participate.

Confidentiality of participants’ data will be maintained throughout the study. All collected data will be de-identified and stored securely, with access limited to the research team members. Participants will be assigned unique codes to protect their identities.

While the proposed intervention is educational and poses minimal risk, potential risks may include discomfort or anxiety during data collection procedures. Trained personnel will be present to address any concerns or distress experienced by participants. Additionally, participants will be informed of their right to withdraw from the study at any time without consequence.

Potential benefits of the study include increased knowledge and awareness about healthy eating habits, improved dietary choices, and the promotion of overall well-being among participating students. The findings may also contribute to the development of effective school-based nutrition education programs, benefiting a broader population of children.”

Significance and Implications

Explain the potential significance and implications of your proposed intervention research. Discuss how your findings could contribute to the existing body of knowledge, inform practice, or influence policy. Highlight the potential impact on the target population and relevant stakeholders.

Example: “The proposed intervention research has the potential to make significant contributions to the field of nutrition education and childhood obesity prevention. By evaluating the effectiveness of a comprehensive school-based program, this study can provide valuable insights into evidence-based strategies for promoting healthy eating behaviors among elementary school students.

If the intervention is found to be effective, the program can be replicated and implemented in other schools or districts, potentially reaching a larger population of children and contributing to the overall efforts to combat childhood obesity. The findings may also inform school policies and practices related to nutrition education, cafeteria menus, and the promotion of healthy eating environments.

Furthermore, the study can contribute to the existing body of knowledge by identifying factors that influence the success or failure of nutrition education interventions, such as age, gender, socioeconomic status, or cultural backgrounds. This understanding can guide the development of tailored and culturally responsive interventions to address the diverse needs of different populations.

Ultimately, the successful implementation of the proposed intervention can have far-reaching implications for children’s health and well-being. By promoting healthy eating habits from an early age, the intervention may contribute to the prevention of obesity-related health issues, such as type 2 diabetes, cardiovascular diseases, and mental health problems. This, in turn, can have long-term benefits for individual well-being, healthcare costs, and societal productivity.”

Conclusion

Summarize the key points of your proposed intervention research paper. Reiterate the purpose, methodology, and potential significance of your study. Emphasize the importance of the research and its potential impact on the target population and related fields.

Example: “In conclusion, this proposed intervention research aims to design and evaluate a comprehensive school-based nutrition education program for elementary school students. The program will employ evidence-based strategies and interactive approaches to enhance students’ knowledge, attitudes, and behaviors related to healthy eating.

The study will utilize a quasi-experimental design, with data collected through various quantitative and qualitative methods, including surveys, dietary assessments, focus group discussions, and observational measures. Rigorous data analysis techniques will be employed to evaluate the effectiveness of the intervention and explore factors influencing its success.

The findings of this research have the potential to make significant contributions to the fields of nutrition education, childhood obesity prevention, and public health. By identifying effective strategies for promoting healthy eating habits among children, the study can inform the development and implementation of similar programs in schools and communities, ultimately contributing to the overall well-being of children and addressing a pressing public health concern.

It is hoped that this proposed intervention research will not only advance scientific knowledge but also have a positive impact on the target population and pave the way for future research and practice in this important area.”

References

Include a list of all the sources you cited in your paper, formatted according to the appropriate citation style (e.g., APA, MLA, Chicago).

Example (APA style):

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall, Inc.

Centers for Disease Control and Prevention. (2022). Childhood obesity facts. https://www.cdc.gov/obesity/data/childhood.html

Johnson, P., Smith, J., & Williams, K. (2020). Effectiveness of a school-based nutrition intervention on dietary behaviors of middle school students. Journal of School Health, 90(5), 345-354. https://doi.org/10.1111/josh.12880

Jones, A., Brown, L., & Taylor, R. (2015). The development and validation of a nutrition knowledge questionnaire for elementary school students. Journal of Nutrition Education and Behavior, 47(4), 346-353. https://doi.org/10.1016/j.jneb.2015.02.001

Smith, J., Wilson, K., & Davis, M. (2018). A school-based nutrition education program: Effects on dietary behaviors of sixth-grade students. Journal of Nutrition Education and Behavior, 50(6), 556-563. https://doi.org/10.1016/j.jneb.2018.02.007


Section 1. Designing Community Interventions

Learn how to develop a program or a policy change that focuses on people's behaviors and how changes in the environment can support those behaviors.
Adapted from "Conducting intervention research: The design and development process" by Stephen B. Fawcett et al.

You've put together a group of motivated, savvy people, who really want to make a difference in the community. Maybe you want to increase adults' physical activity and reduce risks for heart attacks; perhaps you want kids to read more and do better in school. Whatever you want to do, the end is clear enough, but the means--ah, the means are giving you nightmares. How do you reach that goal your group has set for itself? What are the best things to do to achieve it?

Generally speaking, what you're thinking about is intervening in people's environments, making it easier and more rewarding for people to change their behaviors. In the case of encouraging people's physical activity, you might provide information about opportunities, increase access to opportunities, and enhance peer support. Different ways to do this are called, sensibly enough, interventions. Comprehensive interventions combine the various components needed to make a difference.

What is an intervention?

But what exactly is an intervention? Well, what it is can vary. It might be a program, a change in policy, or a certain practice that becomes popular. What is particularly important about interventions, however, is what they do. Interventions focus on people's behaviors, and how changes in the environment can support those behaviors. For example, a group might have the goal of trying to stop men from raping women.

However, it's clearly not enough to broadcast messages saying, "You shouldn't commit a rape." More successful interventions attempt to change the conditions that allow and encourage those behaviors to occur. So interventions that might be used to stop rape include:

  • Improving street lighting to make it easier to avoid potential attackers
  • A "safe ride" program giving free rides so people don't need to walk alone after dark
  • Skills training on date rape and how to avoid it, so that women will practice more careful decision making on dates with men they don't know well, especially in regard to using alcohol and drugs
  • Policy changes such as stronger penalties on people who commit rapes, or that simplify the process a rape victim must go through to bring the perpetrator to justice

Why should you develop interventions?

There are many strong advantages to using interventions as a means to achieve your goals. Some are very apparent; some possibly less so. Some of the more important of these advantages are:

  • By designing and implementing interventions in a clear, systematic manner, you can improve the health and well-being of your community and its residents.
  • Interventions promote understanding of the condition you are working on and its causes and solutions. Simply put, when you do something well, people notice, and the word slowly spreads. In fact, such an intervention can produce a domino effect, sparking others to understand the issue you are working on and to work on it themselves.
For example, a grade school principal in the Midwest was struck by the amount of unsupervised free time students had between three and six o'clock, when their parents got home from work. From visiting her own mother in a nursing home, she knew, too, of the loneliness felt by many residents of such homes. So she decided to try to lessen both problems by starting a "Caring Hearts" program. Students went to nursing homes to see elders after school once or twice a week to visit, play games, and exchange stories. Well, a reporter heard about the program, and did a feature article on it on the cover of the "Community Life" section of the local newspaper. The response was tremendous . Parents from all across town wanted their children involved, and similar programs were developed in several schools throughout the town.
  • Interventions help you do what you are already doing better. Learning to design an intervention properly is important because you are probably doing it already. Most of us working to improve the health and well-being of members of our community design (or at least run) programs, or try to change policies such as local laws or school board regulations, or try to change the things some people regularly practice. By better understanding the theories behind choosing, designing, and developing an intervention, you will improve on the work you are currently doing.

When should you develop an intervention?

It makes sense to develop or redesign an intervention when:

  • There is a community issue or problem that local people and organizations perceive as an unfilled need
  • Your organization has the resources, ability, and desire to fill that need, and
  • You have decided that your group is the appropriate one to accomplish it

The last of these three points deserves some explanation. There will always be things that your organization could do, that quite probably should be left to other organizations or individuals. For example, a volunteer crisis counseling center might find they have the ability to serve as a shelter for people needing a place to stay for a few nights. However, doing so would strain their resources and take staff and volunteers away from the primary mission of the agency.

In cases like this, where could does not equal should , your organization might want to think twice about developing a new intervention that will take away from the mission.

How do you develop an intervention?

So, people are mobilized, the coffee's hot, and you're ready to roll. Your group is ready to take on the issue--you want to design an intervention that will really improve conditions in the area. How do you start?

Decide what needs to happen

This could be a problem that needs to be solved, such as, "too many students are dropping out of school." However, it might also be something good that you want to make more of. For example, you might want to find a way to convince more adults to volunteer with school-aged children. At this point, you will probably want to define the problem broadly, as you will be learning more about it in the next few steps. Keep these questions in mind as you think about this:

  • What behavior needs to change?
  • Whose behavior needs to change?
  • If people are going to change their behavior, what changes in the environment need to occur to make it happen? For example, if you want people to recycle, you'll have much better results if there is easy access to recycling bins.
  • What specific changes should happen as a result of the intervention?

You don't need to have answers to all of these questions at this point. In fact, it's probably better to keep an open mind until you gather more information, including by talking with people who are affected (we'll get to that in the next few steps). But thinking about these questions will help orient you and get you pointed in the right direction.

Use a measurement system to gather information about the level of the problem

You will need to gather information about the level of the problem before you do anything, both to see if it is as serious as it seems and to establish a baseline against which to measure later improvement (or worsening).

Measurement instruments include:

  • Direct observations of behavior. For example, you can watch whether merchants sell alcohol to people under the age of 21.
  • Behavioral surveys. For example, the Youth Risk Behavior Survey of the U.S. Centers for Disease Control and Prevention asks questions about drug use, unprotected sexual activity, and violence.
  • Interviews with key people. For example, you might ask about changes in programs, policies, and practices that the group helped bring about.
  • Review of archival or existing records. For example, you might look at records of the rate of adolescent pregnancy, unemployment, or children living in poverty.

The group might review the level of the problem over time to detect trends--is the problem getting better or worse? It also might gather comparison information--how is your community doing compared with other, similar communities?
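To make "establishing a baseline" concrete, here is a minimal sketch in Python of how a group might turn archival counts into a yearly rate, check the trend, and compare it with a similar community. All of the counts, the population figure, and the comparison rate are invented for illustration.

```python
# Hypothetical archival counts of adolescent pregnancies per year.
# All figures are illustrative, not real data.
yearly_cases = {2018: 62, 2019: 58, 2020: 71, 2021: 75, 2022: 80}
population_15_19 = 4_800           # adolescents aged 15-19 in the community
comparison_rate = 14.2             # rate per 1,000 in a similar community

# Convert raw counts to rates per 1,000 so years and towns are comparable.
rates = {year: cases / population_15_19 * 1_000
         for year, cases in yearly_cases.items()}

for year in sorted(rates):
    print(f"{year}: {rates[year]:.1f} per 1,000")

# Simple trend and comparison checks against the baseline year.
baseline, latest = rates[min(rates)], rates[max(rates)]
print(f"Change since baseline: {latest - baseline:+.1f} per 1,000")
print(f"Difference from comparison community: {latest - comparison_rate:+.1f} per 1,000")
```

Even a rough calculation like this gives the group a defensible starting number to revisit after the intervention has been running for a while.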

Decide who the intervention should help

In a childhood immunization program, your interventions would be aimed at helping children. Likewise, in a program helping people to live independently, the intervention would try to help older adults or people with disabilities. Your intervention might not be targeted at a specific group at all, but instead be meant for the entire community. For example, perhaps you are trying to increase the amount of policing to make local parks safer. This change in law enforcement policy would affect people throughout the community.

Usually, interventions will target the people who will directly benefit from the intervention, but this isn't always the case. For example, a program to try to increase the number of parents and guardians who bring in their children for immunizations on time would benefit the children most directly. However, interventions wouldn't target them, since children aren't the ones making the decision. Instead, the primary "targets of change" for your interventions might be parents and health care professionals.

Before we go on, some brief definitions may be helpful. Targets of change are those people whose behavior you are trying to change. As we saw above, these people may be--but are not always--the same people who will benefit directly from the intervention. They often include others, such as public officials, who have the power to make needed changes in the environment. Agents of change are those people who can help make change occur. Examples might be local residents, community leaders, and policy makers. The "movers and shakers," they are the ones who can make things happen--and whom you definitely want contributing to the solution.

Involve potential clients or end users of the intervention

Once you have decided broadly what should happen and who it should happen with, you need to make sure you have involved the people affected. Even if you think you know what they want--ask anyway. For your intervention to be successful, you can't have too much feedback. Some of these folks will likely have a perspective on the issue you hadn't even thought of.

Also, by asking for their help, the program becomes theirs. For example, when teachers and parents have input in designing a "school success" intervention, they take "ownership" of the program. They become proud of it--which means they won't only use it, they'll also support it and tell their friends, and word will spread.

Again, for ideas on how to find and choose these people, the section mentioned above on targets and agents of change may be helpful.

Identify the issues or problems you will attempt to solve together

There are a lot of ways in which you can talk with people affected about the information that interests you. Some of the more common methods include:

  • Informal personal contact - just talking with people, and seeing what they have to say
  • Focus groups
  • Community forums
  • Concerns surveys

When you are talking to people, try to get at the real issue--the one that is the underlying reason for what's going on. It's often necessary to focus not on the problem itself, but on the cause of the problem.

For example, if you want to reduce the number of people in your town who are homeless, you need to find out why so many people in your town lack decent shelter: Do they lack the proper skills to get jobs? Is there a large mentally ill population that isn't receiving the help it should? Your eventual intervention may address deeper causes, seeming to have little to do with reducing homelessness directly, although that remains the goal.

Analyze these problems or the issue to be addressed in the intervention

Using the information you gathered in the previous step, you need to decide on answers to some important questions. These will depend on your situation, but many of the following questions might be appropriate for your purpose:

  • What factors put people at risk for (or protect them against) the problem or concern?
  • Whose behavior (or lack of behavior) caused the problem?
  • Whose behavior (or lack of behavior) maintains the problem?
  • For whom is the situation a problem?
  • What are the negative consequences for those directly affected?
  • What are the negative consequences for the community?
  • Who, if anyone, benefits from things being the way they are now?
  • How do they benefit?
  • Who should share the responsibility for solving the problem?
  • What behaviors need to change to consider the problem "solved"?
  • What conditions need to change to address the issue or problem?
  • How much change is necessary?
  • At what level(s) should the problem be addressed? Is it something that should be addressed by individuals; by families working together; by local organizations or neighborhoods; or at the level of the city, town, or broader environment?
  • Will you be able to make changes at the level(s) identified? This question covers technical capability, whether you have enough money, and whether the change is politically possible.

Set goals and objectives

When you have gotten this far, you are ready to set the broad goals and objectives of what the intervention will do. Remember, at this point you still have NOT decided what that intervention will be. This may seem a little backwards--but we're starting from the finish line and working back toward the start. Give it a try--we think it will work for you.

Specifically, you will want to answer the following questions as concretely as you can:

  • What should the intervention accomplish? For example, your goal might be for most of the homeless people who are able to hold jobs to do so by the end of the intervention.
  • What will success look like? If your intervention is successful, how will you know it? How will you explain to other people that the intervention has worked? What are the "benchmarks" or indicators that show you are moving in the right direction?
  • For example, you might say, "By 2010 (when), 80% of those now homeless (who) will be successfully employed at least part time (change sought)."

Learn what others have done

Now, armed with all of the information you have found so far, you are ready to start concentrating on the specific intervention itself. The easiest way to start is by finding out what other people in your situation have done. Don't reinvent the wheel! There might be some "best practices"--exceptional programs or policies--out there that are close to what you want to do. It's worth taking the time to try to find them.

Where do you look for promising approaches? There are a lot of possibilities, and how exhaustive your search is will depend on the time and resources you have (not to mention how long it takes you to find something you like!). Some of the more common resources you might start with include:

  • See what local examples are available. What has worked in your community? How about in nearby places? Can you figure out why it worked? If possible, talk to the people responsible for those approaches, and try to understand why and how they did what they did.
  • Look for examples of what has been done in articles and studies in related fields. Sources might be professional journals, such as the American Journal of Public Health, or even occasionally, general news magazines. Also, look at interventions that have been done for related problems--perhaps they can be adapted for use by your group. Information and awareness events, for example, tend to be general in nature--you can do a similar event and change what it's for. A 5-K race might be planned, for example, to raise awareness of and money for breast cancer, to protest environmental destruction, and so on.
  • National conferences. If you can, attending national meetings or conferences on the problem or issue you are trying to solve can give you excellent insight into some of the "best practices" that are out there.

Brainstorm ideas of your own

Take a sheet of paper and write down all of the possibilities you can think of. If you are deciding as a group, this could be done on poster paper attached to a wall, so everyone can see the possibilities-- this often works to help people come up with other ideas. Be creative!

Try to decide what interventions or parts of interventions have worked, and what might be applicable to your situation

What can your organization afford to do? And by afford, we mean in terms of money, politics, time, and other resources. For example, how much time can you put into this? Will the group lose stature in the community, or support from certain people, by doing a particular intervention?

When you are considering interventions done by others, look specifically for ones that are:

  • Appropriate - Do they fit the group's purpose?
  • Effective - Did they make a difference on behavior and outcome?
  • Replicable - Are the details and results of what happened in the original intervention explained well enough to repeat what was done? Unfortunately, this isn't always the case--many people, when you talk to them, will say, "Oh! We just did it!"
  • Simple - Is it clear enough for people in your group to do?
  • Practical - Do we have the time and money to do this?
  • Compatible with your situation - Does it fit local needs, resources, and values?

Identify barriers and resistance you might come up against

What barriers and resistance might we face? How can they be overcome? Be prepared for whatever may come your way.

For example, a youth group working to prevent substance use wanted to outlaw smoking on the high school campus for everyone, including teachers and other staff members. However, they knew they would come up against resistance from teachers and staff members who smoked. How might they overcome that opposition?

Identify core components and elements of the intervention

Here is where we get to the nuts and bolts of designing an intervention.

First, decide the core components that will be used in the intervention. Much like broad strategies, these are the general things you will do as part of the intervention. They are the "big ideas" that can then be further broken down.

There are four classes of components to consider when designing your intervention:

  • Providing information and skills training
  • Enhancing support and resources
  • Modifying access and barriers
  • Monitoring and giving feedback

A comprehensive intervention will choose components for each of these four categories. For example, a youth mentoring program might choose the following components:

  • For providing information and skills training, a component might be recruitment of youth and mentors
  • For enhancing support and resources, a component might be arranging celebrations among program participants
  • For modifying access and barriers, a component might be making it easier to volunteer
  • For monitoring and giving feedback, a component might be tracking the number of young people and volunteers involved

Next, decide the specific elements that compose each of the components. These elements are the distinct activities that will be done to implement the components.

For example, a comprehensive effort to prevent youth smoking might include public awareness and skills training, restricting tobacco advertising, and modifying access to tobacco products. For the component of trying to modify access, an element of this strategy might be to do 'stings' at convenience stores to see which merchants are selling tobacco illegally to teens. Another element might be to give stiffer penalties to teens who try to buy cigarettes, and to those merchants who sell.
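If it helps to see the component-and-element structure laid out, the short Python sketch below organizes the youth smoking example as a simple nested mapping from components to elements. The particular component and element names are illustrations only, not a prescribed plan.

```python
# A hypothetical youth smoking prevention effort, organized as components
# (the "big ideas") mapped to elements (the concrete activities).
intervention_plan = {
    "Providing information and skills training": [
        "Public awareness campaign in schools",
        "Refusal-skills workshops for teens",
    ],
    "Modifying access and barriers": [
        "Compliance 'stings' at convenience stores",
        "Stiffer penalties for merchants who sell to minors",
    ],
    "Monitoring and giving feedback": [
        "Track merchant compliance rates over time",
    ],
}

# Print the plan as an outline for review at a planning meeting.
for component, elements in intervention_plan.items():
    print(component)
    for element in elements:
        print(f"  - {element}")
```

Laying the plan out this way makes it easy to spot a component that has no elements attached to it yet.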

Develop an action plan to carry out the intervention

When you are developing your action plan, you will want it to answer the following questions:

  • What components and elements will be implemented?
  • Who should implement what by when?
  • What resources and support are needed? What are available?
  • What potential barriers or resistance are expected? How will they be minimized?
  • What individuals or organizations need to be informed? What do you need to tell them?

Pilot-test your intervention

None of us likes to fall flat on our face, but frankly, it's a lot easier when there aren't very many people there to watch us, and when there isn't a lot on the line. By testing your intervention on a small scale, you have the chance to work out the bugs and get back on your feet before the crowd comes in. When doing your pilot test, you need to do the following things:

  • Decide how the intervention will be tested on a small scale
  • Evaluate your results
  • Pay particular attention to unintended consequences or side effects that you find when you evaluate your work
  • Use feedback from those who tried the intervention to simplify and refine your plan

Implement your intervention

If you have followed all of the steps above, implementing your action plan will be easier. Go to it!

Constantly monitor and evaluate your work

When the wheels are turning and things seem to be under control, congratulations! You have successfully implemented your intervention! But of course, the work never ends. It's important to see if the intervention is working, and to "tweak" it and make changes as necessary.

Designing an intervention isn't necessarily an easy task. There are a lot of steps involved, and a lot of work to be done, if you are going to do it well. But by going through the process systematically, you are able to catch mistakes before they happen; you can stand on the shoulders of those who have done this work before you and learn from their successes and failures.

Online Resources

Community Health Advisor from the Robert Wood Johnson Foundation is a helpful online tool with detailed information about evidence-based policies and programs to reduce tobacco use and increase physical activity in communities.

The Society for Community Research and Action serves many different disciplines that are involved in strategies to improve communities. It hosts a general electronic discussion list as well as several by special interest.

The U.S. Dept. of Housing and Urban Development features "Success Stories" and gives ideas for ways to solve problems in your community.

The National Civic League provides a database of Success Stories.

The Pew Partnership for Civic Change offers several resources for promising solutions for building strong communities.

The World Health Organization provides information on many types of interventions around the world.

Print Resources

Fawcett, S., Suarez-Balcazar, Y., Balcazar, F., White, G., Paine, A., Blanchard, K., & Embree, M. (1994). Conducting intervention research: The design and development process. In J. Rothman & E. J. Thomas (Eds.), Intervention research: Design and development for human service (pp. 25-54). New York, NY: Haworth Press.


Implementation research: what it is and how to do it

David H Peters, professor1; Taghreed Adam, scientist2; Olakunle Alonge, assistant scientist1; Irene Akua Agyepong, specialist public health3; Nhan Tran, manager4

1 Johns Hopkins University Bloomberg School of Public Health, Department of International Health, 615 N Wolfe St, Baltimore, MD 21205, USA
2 Alliance for Health Policy and Systems Research, World Health Organization, CH-1211 Geneva 27, Switzerland
3 University of Ghana School of Public Health/Ghana Health Service, Accra, Ghana
4 Alliance for Health Policy and Systems Research, Implementation Research Platform, World Health Organization, CH-1211 Geneva 27, Switzerland

Correspondence to: D H Peters, dpeters{at}jhsph.edu

Accepted 8 October 2013

Implementation research is a growing but not well understood field of health research that can contribute to more effective public health and clinical policies and programmes. This article provides a broad definition of implementation research and outlines key principles for how to do it.

The field of implementation research is growing, but it is not well understood despite the need for better research to inform decisions about health policies, programmes, and practices. This article focuses on the context and factors affecting implementation, the key audiences for the research, implementation outcome variables that describe various aspects of how implementation occurs, and the study of implementation strategies that support the delivery of health services, programmes, and policies. We provide a framework for using the research question as the basis for selecting among the wide range of qualitative, quantitative, and mixed methods that can be applied in implementation research, along with brief descriptions of methods specifically suitable for implementation research. Expanding the use of well designed implementation research should contribute to more effective public health and clinical policies and programmes.

Defining implementation research

Implementation research attempts to solve a wide range of implementation problems; it has its origins in several disciplines and research traditions (supplementary table A). Although progress has been made in conceptualising implementation research over the past decade,1 considerable confusion persists about its terminology and scope.2 3 4 The word “implement” comes from the Latin “implere,” meaning to fulfil or to carry into effect.5 This provides a basis for a broad definition of implementation research that can be used across research traditions and has meaning for practitioners, policy makers, and the interested public: “Implementation research is the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).”

Implementation research can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation, including how to introduce potential solutions into a health system or how to promote their large scale use and sustainability. The intent is to understand what, why, and how interventions work in “real world” settings and to test approaches to improve them.

Principles of implementation research

Implementation research seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects. This implies working with populations that will be affected by an intervention, rather than selecting beneficiaries who may not represent the target population of an intervention (such as studying healthy volunteers or excluding patients who have comorbidities).

Context plays a central role in implementation research. Context can include the social, cultural, economic, political, legal, and physical environment, as well as the institutional setting, comprising various stakeholders and their interactions, and the demographic and epidemiological conditions. The structure of the health systems (for example, the roles played by governments, non-governmental organisations, other private providers, and citizens) is particularly important for implementation research on health.

Implementation research is especially concerned with the users of the research and not purely the production of knowledge. These users may include managers and teams using quality improvement strategies, executive decision makers seeking advice for specific decisions, policy makers who need to be informed about particular programmes, practitioners who need to be convinced to use interventions that are based on evidence, people who are influenced to change their behaviour to have a healthier life, or communities who are conducting the research and taking action through the research to improve their conditions (supplementary table A). One important implication is that often these actors should be intimately involved in the identification, design, and conduct phases of research and not just be targets for dissemination of study results.

Implementation outcome variables

Implementation outcome variables describe the intentional actions to deliver services.6 These implementation outcome variables—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability—can all serve as indicators of the success of implementation (table 1). Implementation research uses these variables to assess how well implementation has occurred or to provide insights about how implementation contributes to health status or other important health outcomes.

Table 1. Implementation outcome variables (table not reproduced here)
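Since the table itself is not reproduced, the following minimal Python sketch simply enumerates the eight outcome variables named above, attaching invented 1-5 ratings for a hypothetical programme to suggest one way a team might track them side by side.

```python
# The eight implementation outcome variables listed in the article,
# with illustrative 1-5 ratings for a hypothetical outreach programme.
OUTCOME_VARIABLES = [
    "acceptability", "adoption", "appropriateness", "feasibility",
    "fidelity", "implementation cost", "coverage", "sustainability",
]

ratings = {"acceptability": 4, "adoption": 3, "appropriateness": 5,
           "feasibility": 4, "fidelity": 2, "implementation cost": 3,
           "coverage": 3, "sustainability": 2}

for variable in OUTCOME_VARIABLES:
    print(f"{variable:20s} {'*' * ratings[variable]}")

# Low fidelity and sustainability ratings would flag where the
# implementation strategy, rather than the intervention itself,
# needs attention.
```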

Implementation strategies

Curran and colleagues defined an “implementation intervention” as a method to “enhance the adoption of a ‘clinical’ intervention,” such as the use of job aids, provider education, or audit procedures.7 The concept can be broadened to any type of strategy that is designed to support a clinical or population and public health intervention (for example, outreach clinics and supervision checklists are implementation strategies used to improve the coverage and quality of immunisation).

A review of ways to improve health service delivery in low and middle income countries identified a wide range of successful implementation strategies (supplementary table B).8 Even in the most resource constrained environments, measuring change, informing stakeholders, and using information to guide decision making were found to be critical to successful implementation.

Implementation influencing variables

Other factors that influence implementation may need to be considered in implementation research. Sabatier summarised a set of such factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).9

The large array of contextual factors that influence implementation, interact with each other, and change over time highlights the fact that implementation often occurs as part of complex adaptive systems.10 Some implementation strategies are particularly suitable for working in complex systems. These include strategies to provide feedback to key stakeholders and to encourage learning and adaptation by implementing agencies and beneficiary groups. Such strategies have implications for research, as the study methods need to be sufficiently flexible to account for changes or adaptations in what is actually being implemented.8 11 Research designs that depend on having a single and fixed intervention, such as a typical randomised controlled trial, would not be appropriate for studying phenomena that change, especially when they change in unpredictable and variable ways.

Another implication of studying complex systems is that the research may need to use multiple methods and different sources of information to understand an implementation problem. Because implementation activities and effects are not usually static or linear processes, research designs often need to be able to observe and analyse these sometimes iterative and changing elements at several points in time and to consider unintended consequences.

Implementation research questions

As in other types of health systems research, the research question is king in implementation research. Implementation research takes a pragmatic approach, placing the research question (or implementation problem) as the starting point of inquiry; this then dictates the research methods and assumptions to be used. Implementation research questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective (examples are in supplementary table C).12 13

Implementation research can overlap with other types of research used in medicine and public health, and the distinctions are not always clear cut. A range of implementation research exists, based on the centrality of implementation in the research question, the degree to which the research takes place in a real world setting with routine populations, and the role of implementation strategies and implementation variables in the research (see figure below).

Figure. Spectrum of implementation research33 (figure not reproduced here)

A more detailed description of the research question can help researchers and practitioners to determine the type of research methods that should be used. In table 2, we break down the research question first by its objective: to explore, describe, influence, explain, or predict. This is followed by a typical implementation research question based on each objective. Finally, we describe a set of research methods for each type of research question.

Table 2. Type of implementation research objective, implementation question, and research methods (table not reproduced here)

Much of evidence based medicine is concerned with the objective of influence, or whether an intervention produces an expected outcome, which can be broken down further by the level of certainty in the conclusions drawn from the study. The nature of the inquiry (for example, the amount of risk and considerations of ethics, costs, and timeliness), and the interests of different audiences, should determine the level of uncertainty.8 14 Research questions concerning programmatic decisions about the process of an implementation strategy may justify a lower level of certainty for the manager and policy maker, using research methods that would support an adequacy or plausibility inference.14 Where a high risk of harm exists and sufficient time and resources are available, a probability study design might be more appropriate, in which the result in an area where the intervention is implemented is compared with areas without implementation, with a low probability of error (for example, P<0.05). These differences in the level of confidence affect the study design in terms of sample size and the need for concurrent or randomised comparison groups.8 14
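As an illustration of the probability end of this spectrum, here is a minimal sketch of a two-proportion z-test in Python (standard library only) comparing an outcome between intervention and comparison areas. The survey counts are invented, and a real area-level comparison would also need an appropriate sampling design (for example, adjustment for clustering) and a sample size calculation.

```python
import math

# Hypothetical household survey results: children fully immunised.
covered_a, surveyed_a = 412, 600   # areas with the implementation strategy
covered_b, surveyed_b = 355, 600   # comparison areas without it

p_a, p_b = covered_a / surveyed_a, covered_b / surveyed_b

# Two-proportion z-test using the pooled proportion under the null hypothesis.
pooled = (covered_a + covered_b) / (surveyed_a + surveyed_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / surveyed_a + 1 / surveyed_b))
z = (p_a - p_b) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided P value

print(f"Coverage: {p_a:.1%} vs {p_b:.1%}, z = {z:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference unlikely to be due to chance at the P<0.05 level.")
```

A plausibility or adequacy design, by contrast, might skip the formal test and simply document whether coverage reached a pre-agreed level in the implementation areas.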

Implementation specific research methods

A wide range of qualitative and quantitative research methods can be used in implementation research (table 2). The box gives a set of basic questions to guide the design or reporting of implementation research that can be used across methods. More in-depth criteria have also been proposed to assess the external validity or generalisability of findings.15 Some research methods have been developed specifically to deal with implementation research questions or are particularly suitable to implementation research, as identified below.

Key questions to assess research designs or reports on implementation research33

Does the research clearly aim to answer a question concerning implementation?

Does the research clearly identify the primary audiences for the research and how they would use the research?

Is there a clear description of what is being implemented (for example, details of the practice, programme, or policy)?

Does the research involve an implementation strategy? If so, is it described and examined in its fullness?

Is the research conducted in a “real world” setting? If so, are the context and sample population described in sufficient detail?

Does the research appropriately consider implementation outcome variables?

Does the research appropriately consider context and other factors that influence implementation?

Does the research appropriately consider changes over time and the level of complexity of the system, including unintended consequences?

Pragmatic trials

Pragmatic trials, or practical trials, are randomised controlled trials in which the main research question focuses on the effectiveness of an intervention in a normal practice setting with the full range of study participants.16 This may include pragmatic trials of new healthcare delivery strategies, such as integrated chronic care clinics or nurse run community clinics. This contrasts with typical randomised controlled trials that look at the efficacy of an intervention in an “ideal” or controlled setting, with highly selected patients and standardised clinical outcomes, usually of a short term nature.

Effectiveness-implementation hybrid trials

Effectiveness-implementation hybrid designs are intended to assess the effectiveness of both an intervention and an implementation strategy.7 These studies include components of an effectiveness design (for example, randomised allocation to intervention and comparison arms) but add the testing of an implementation strategy, which may also be randomised. This might include testing the effectiveness of a package of delivery and postnatal care in under-served areas, as well as testing several strategies for providing the care. Whereas pragmatic trials try to hold the intervention under study fixed, effectiveness-implementation hybrids also intervene in and/or observe the implementation process as it actually occurs. This can be done by assessing implementation outcome variables.

Quality improvement studies

Quality improvement studies typically involve a set of structured and cyclical processes, often called the plan-do-study-act cycle, and apply scientific methods on a continuous basis to formulate a plan, implement the plan, and analyse and interpret the results, followed by an iteration of what to do next.17 18 The focus might be on a clinical process, such as how to reduce hospital acquired infections in the intensive care unit, or on management processes such as how to reduce waiting times in the emergency room. Guidelines exist on how to design and report such research—the Standards for Quality Improvement Reporting Excellence (SQUIRE).17

Speroff and O’Connor describe a range of plan-do-study-act research designs, noting that they have in common the assessment of responses measured repeatedly and regularly over time, either in a single case or with comparison groups.18 Balanced scorecards integrate performance measures across a range of domains and feed into regular decision making.19 20 Standardised guidance for using good quality health information systems and health facility surveys has been developed and often provides the sources of information for these quasi-experimental designs.21 22 23

Participatory action research

Participatory action research refers to a range of research methods that emphasise participation and action (that is, implementation), using methods that involve iterative processes of reflection and action, “carried out with and by local people rather than on them.”24 In participatory action research, a distinguishing feature is that power and control over the process rest with the participants themselves. Although most participatory action methods involve qualitative methods, quantitative and mixed methods techniques are increasingly being used, such as for participatory rural appraisal or participatory statistics.25 26

Mixed methods

Mixed methods research uses both qualitative and quantitative methods of data collection and analysis in the same study. Although not designed specifically for implementation research, mixed methods are particularly suitable because they provide a practical way to understand multiple perspectives, different types of causal pathways, and multiple types of outcomes—all common features of implementation research problems.

Many different schemes exist for describing types of mixed methods research, on the basis of the emphasis of the study, the sampling schemes for the different components, the timing and sequencing of the qualitative and quantitative methods, and the level of mixing between the qualitative and quantitative methods.27 28 Broad guidance on the design and conduct of mixed methods studies is available.29 30 31 A scheme for good reporting of mixed methods studies involves describing the justification for using a mixed methods approach to the research question; describing the design in terms of the purpose, priority, and sequence of methods; describing each method in terms of sampling, data collection, and analysis; describing where the integration has occurred, how it has occurred, and who has participated in it; describing any limitation of one method associated with the presence of the other method; and describing any insights gained from mixing or integrating methods.32

Implementation research aims to cover a wide set of research questions, implementation outcome variables, factors affecting implementation, and implementation strategies. This paper has identified a range of qualitative, quantitative, and mixed methods that can be used according to the specific research question, as well as several research designs that are particularly suited to implementation research. Further details of these concepts can be found in a new guide developed by the Alliance for Health Policy and Systems Research.33

Summary points

Implementation research has its origins in many disciplines and is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention

In health research, these intentions can be policies, programmes, or individual practices (collectively called interventions)

Implementation research seeks to understand and work in “real world” or usual practice settings, paying particular attention to the audience that will use the research, the context in which implementation occurs, and the factors that influence implementation

A wide variety of qualitative, quantitative, and mixed methods techniques can be used in implementation research, which are best selected on the basis of the research objective and specific questions related to what, why, and how interventions work

Implementation research may examine strategies that are specifically designed to improve the carrying out of health interventions or assess variables that are defined as implementation outcomes

Implementation outcomes include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability

Cite this as: BMJ 2013;347:f6753

Contributors: All authors contributed to the conception and design, analysis and interpretation, drafting the article, or revising it critically for important intellectual content, and all gave final approval of the version to be published. NT had the original idea for the article, which was discussed by the authors (except OA) as well as George Pariyo, Jim Sherry, and Dena Javadi at a meeting at the World Health Organization (WHO). DHP and OA did the literature reviews, and DHP wrote the original outline and the draft manuscript, tables, and boxes. OA prepared the original figure. All authors reviewed the draft article and made substantial revisions to the manuscript. DHP is the guarantor.

Funding: Funding was provided by the governments of Norway and Sweden and the UK Department for International Development (DFID) in support of the WHO Implementation Research Platform, which financed a meeting of authors and salary support for NT. DHP is supported by the Future Health Systems research programme consortium, funded by DFID for the benefit of developing countries (grant number H050474). The funders played no role in the design, conduct, or reporting of the research.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support for the submitted work as described above; NT and TA are employees of the Alliance for Health Policy and Systems Research at WHO, which is supporting their salaries to work on implementation research; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Provenance and peer review: Invited by journal; commissioned by WHO; externally peer reviewed.

References

1. Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and implementation research in health: translating science to practice. Oxford University Press, 2012.
2. Ciliska D, Robinson P, Armour T, Ellis P, Brouwers M, Gauld M, et al. Diffusion and dissemination of evidence-based dietary strategies for the prevention of cancer. Nutr J 2005;4(1):13.
3. Remme JHF, Adam T, Becerra-Posada F, D’Arcangues C, Devlin M, Gardner C, et al. Defining research to improve health systems. PLoS Med 2010;7:e1001000.
4. McKibbon KA, Lokker C, Mathew D. Implementation research. 2012. http://whatiskt.wikispaces.com/Implementation+Research.
5. The compact edition of the Oxford English dictionary. Oxford University Press, 1971.
6. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2010;38:65-76.
7. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 2012;50:217-26.
8. Peters DH, El-Saharty S, Siadat B, Janovsky K, Vujicic M, eds. Improving health services in developing countries: from evidence to action. World Bank, 2009.
9. Sabatier PA. Top-down and bottom-up approaches to implementation research. J Public Policy 1986;6(1):21-48.
10. Paina L, Peters DH. Understanding pathways for scaling up health services through the lens of complex adaptive systems. Health Policy Plan 2012;27:365-73.
11. Gilson L, ed. Health policy and systems research: a methodology reader. World Health Organization, 2012.
12. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337-50.
13. Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implement Sci 2006;1:4.
14. Habicht JP, Victora CG, Vaughn JP. Evaluation designs for adequacy, plausibility, and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-8.
15. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research. Eval Health Prof 2006;29:126-53.
16. Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al, for the CONSORT and Pragmatic Trials in Healthcare (Practihc) Groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
17. Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney SE, for the SQUIRE Development Group. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care 2008;17(suppl 1):i3-9.
18. Speroff T, O’Connor GT. Study designs for PDSA quality improvement research. Q Manage Health Care 2004;13(1):17-32.
19. Peters DH, Noor AA, Singh LP, Kakar FK, Hansen PM, Burnham G. A balanced scorecard for health services in Afghanistan. Bull World Health Organ 2007;85:146-51.
20. Edward A, Kumar B, Kakar F, Salehi AS, Burnham G, Peters DH. Configuring balanced scorecards for measuring health systems performance: evidence from five years’ evaluation in Afghanistan. PLoS Med 2011;7:e1001066.
21. Health Facility Assessment Technical Working Group. Profiles of health facility assessment methods. MEASURE Evaluation, USAID, 2008.
22. Hotchkiss D, Diana M, Foreit K. How can routine health information systems improve health systems functioning in low-resource settings? Assessing the evidence base. MEASURE Evaluation, USAID, 2012.
23. Lindelow M, Wagstaff A. Assessment of health facility performance: an introduction to data and measurement issues. In: Amin S, Das J, Goldstein M, eds. Are you being served? New tools for measuring service delivery. World Bank, 2008:19-66.
24. Cornwall A, Jewkes R. What is participatory research? Soc Sci Med 1995;41:1667-76.
25. Mergler D. Worker participation in occupational health research: theory and practice. Int J Health Serv 1987;17:151.
26. Chambers R. Revolutions in development inquiry. Earthscan, 2008.
27. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Sage Publications, 2011.
28. Tashakkori A, Teddlie C. Mixed methodology: combining qualitative and quantitative approaches. Sage Publications, 2003.
29. Leech NL, Onwuegbuzie AJ. Guidelines for conducting and reporting mixed research in the field of counseling and beyond. Journal of Counseling and Development 2010;88:61-9.
30. Creswell JW. Mixed methods procedures. In: Research design: qualitative, quantitative and mixed methods approaches. 3rd ed. Sage Publications, 2009.
31. Creswell JW, Klassen AC, Plano Clark VL, Clegg Smith K. Best practices for mixed methods research in the health sciences. National Institutes of Health, Office of Behavioral and Social Sciences Research, 2011.
32. O’Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy 2008;13:92-8.
33. Peters DH, Tran N, Adam T, Ghaffar A. Implementation research in health: a practical guide. Alliance for Health Policy and Systems Research, World Health Organization, 2013.

Further reading

  • Rogers EM. Diffusion of innovations. 5th ed. Free Press, 2003.
  • Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci 2007;2:40.
  • Victora CG, Schellenberg JA, Huicho L, Amaral J, El Arifeen S, Pariyo G, et al. Context matters: interpreting impact findings in child survival evaluations. Health Policy Plan 2005;20(suppl 1):i18-31.

  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic History
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Management of Land and Natural Resources (Social Science)
  • Natural Disasters (Environment)
  • Pollution and Threats to the Environment (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Sustainability
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Administration
  • Public Policy
  • Qualitative Political Methodology
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Disability Studies
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Intervention Research: Developing Social Programs


When social workers draw on experience, theory, or data to develop new strategies or enhance existing ones, they are conducting intervention research. This relatively new field involves program design, implementation, and evaluation, and it requires a theory-based, systematic approach. Intervention Research presents such a framework. The five-step strategy described in this brief but thorough book ushers the reader from an idea’s germination through the process of writing a treatment manual, assessing program efficacy and effectiveness, and disseminating findings. Rich with examples drawn from child welfare, school-based prevention, medicine, and juvenile justice, Intervention Research relates each step of the process to current social work practice, explains how to adapt interventions for new contexts, and offers insights about changes and challenges in the field. This innovative pocket guide will serve as a solid reference for those already in the field and help the next generation of social workers develop the skills to contribute to the evolving field of intervention research.



The University of Chicago Law School

Meet the Class: Margaret Lim, ’27, Human Welfare Advocate with a Global Mindset Aims to Harness the Law


Margaret Lim, ’27, is a graduate of Cornell University, where she double majored in government and English. A proud daughter of Filipino immigrant parents, Lim grew up in Searcy, Arkansas, later traveling to England to study at Oxford University. Her experience abroad was illuminating; it inspired her interest in global issues and communities. Now at UChicago Law, she hopes to use her legal education to advocate for human welfare around the world.

Please describe your professional background and path.

I grew up in a small town in Arkansas before heading to Cornell for college. Researching the diverse struggles of immigrants sparked my interest in finding solutions for social issues, and I moved to England to pursue an MPhil (Master of Philosophy in Evidence-Based Social Intervention and Policy Evaluation) at the University of Oxford.

What key experiences have shaped you?

Living in Oxford, with its vibrant international community, exposed me to opinions, ideas, and perspectives I had not encountered before. This exchange of information has inspired my interest in global issues and the collaborations needed to solve complex, systemic problems.

What motivated your decision to go to law school?

I hope to harness the power of words to protect global communities. I know that law school will provide the intellectual challenge and practical skills I need to achieve my goals.

Why did you select the University of Chicago Law School?

Crescat scientia; vita excolatur. “Let knowledge grow from more to more and so be human life enriched.” The school motto resonates deeply with me, and I look forward to being in a community that promotes interdisciplinary learning, discussion and respect of various beliefs, and learning for learning’s sake.

What do you plan to do with your legal education?

While I am eager to explore the many facets of legal education at UChicago Law and am open to an array of possibilities, I am currently interested in how research, legal systems, and social services can promote human welfare and protect populations. I am excited to gain insight from the Law School community as I navigate my career path.

What is the thing you are most looking forward to about being a law student?

I most look forward to learning from my professors and peers as well as engaging with all of the topics, skills, and knowledge that the Law School has to offer! I cannot wait to engage with an enriching and vibrant community full of legal scholars and students.

What are some of your hobbies or interests?

In my free time, I enjoy exploring new places, reading books, going on nature walks, and spending time with family and friends. I also like searching for the best pad Thai spot.

What is a “fun fact” about you?

I am scuba diving certified! When I was fourteen years old, I visited my parents’ home country, the Philippines. We explored my Lola’s backyard full of coral reefs. I would love to dive there again and explore other aquatic ecosystems in the world, like the Great Barrier Reef.

Anything else you’d like to share?

I am a proud daughter of loving, hard-working immigrant parents who have been champions of my education and inspired me to use my skills to better humanity. I am grateful to them, as well as all my family, friends, professors, and mentors who support me.


Institute of Medicine (US) Committee on Health and Behavior: Research, Practice, and Policy. Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington (DC): National Academies Press (US); 2001.


7 Evaluating and Disseminating Intervention Research

Efforts to change health behaviors should be guided by clear criteria of efficacy and effectiveness of the interventions. However, this has proved surprisingly complex and is the source of considerable debate.

The principles of science-based interventions cannot be overemphasized. Medical practices and community-based programs are often based on professional consensus rather than evidence. The efficacy of interventions can only be determined by appropriately designed empirical studies. Randomized clinical trials provide the most convincing evidence, but may not be suitable for examining all of the factors and interactions addressed in this report.

Information about efficacious interventions needs to be disseminated to practitioners. Furthermore, feedback from practitioners is needed to determine the overall effectiveness of interventions in real-life settings. Information from physicians, community leaders, public health officials, and patients is essential to that determination.

The preceding chapters review contemporary research on health and behavior from the broad perspectives of the biological, behavioral, and social sciences. A recurrent theme is that continued multidisciplinary and interdisciplinary efforts are needed. Enough research evidence has accumulated to warrant wider application of this information. To extend its use, however, existing knowledge must be evaluated and disseminated. This chapter addresses the complex relationship between research and application. The challenge of bridging research and practice is discussed with respect to clinical interventions, communities, public agencies, systems of health care delivery, and patients.

During the early 1980s, the National Heart, Lung, and Blood Institute (NHLBI) and the National Cancer Institute (NCI) suggested a sequence of research phases for the development of programs that were effective in modifying behavior ( Greenwald, 1984 ; Greenwald and Cullen, 1984 ; NHLBI, 1983 ): hypothesis generation (phase I), intervention methods development (phase II), controlled intervention trials (phase III), studies in defined populations (phase IV), and demonstration research (phase V). Those phases reflect the importance of methods development in providing a basis for large-scale trials and the need for studies of the dissemination and diffusion process as a means of identifying effective application strategies. A range of research and evaluation methods are required to address diverse needs for scientific rigor, appropriateness and benefit to the communities involved, relevance to research questions, and flexibility in cost and setting. Inclusion of the full range of phases from hypothesis generation to demonstration research should facilitate development of a more balanced perspective on the value of behavioral and psychosocial interventions.

  • EVALUATING INTERVENTIONS

Assessing Outcomes

Choice of outcome measures.

The goals of health care are to increase life expectancy and improve health-related quality of life. Major clinical trials in medicine have evolved toward the documentation of those outcomes. As more trials documented effects on total mortality, some surprising results emerged. For example, studies commonly report that, compared with placebo, lipid-lowering agents reduce total cholesterol and low-density lipoprotein cholesterol, and might increase high-density lipoprotein cholesterol, thereby reducing the risk of death from coronary heart disease ( Frick et al., 1987 ; Lipid Research Clinics Program, 1984 ). Those trials usually were not associated with reductions in death from all causes ( Golomb, 1998 ; Muldoon et al., 1990 ). Similarly, He et al. (1999) demonstrated that intake of dietary sodium in overweight people was not related to the incidence of coronary heart disease but was associated with mortality from coronary heart disease. Another example can be found in the treatment of cardiac arrhythmia. Among adults who previously suffered a myocardial infarction, symptomatic cardiac arrhythmia is a risk factor for sudden death ( Bigger, 1984 ). However, a randomized drug trial in 1,455 post-infarction patients demonstrated that those who were randomly assigned to take an anti-arrhythmia drug showed reduced arrhythmia, but were significantly more likely to die from arrhythmia and from all causes than those assigned to take a placebo. If investigators had measured only heart rhythm changes, they would have concluded that the drug was beneficial. Only when primary health outcomes were considered was it established that the drug was dangerous ( Cardiac Arrhythmia Suppression Trial (CAST) Investigators, 1989 ).

Many behavioral intervention trials document the capacity of interventions to modify risk factors ( NHLBI, 1998 ), but relatively few Level I studies measured outcomes of life expectancy and quality of life. As the examples above point out, assessing risk factors may not be adequate. Ramifications of interventions are not always apparent until they are fully evaluated. It is possible that a recommendation for a behavioral change could increase mortality through unforeseen consequences. For example, a recommendation of increased exercise might heighten the incidence of roadside auto fatalities. Although risk factor modification is expected to improve outcomes, assessment of increased longevity is essential. Measurement of mortality as an endpoint does necessitate long-duration trials that can incur greater costs.

Outcome Measurement

One approach to representing outcomes comprehensively is the quality-adjusted life year (QALY). QALY is a measure of life expectancy ( Gold et al., 1996 ; Kaplan and Anderson, 1996 ) that integrates mortality and morbidity in terms of equivalents of well-years of life. If a woman expected to live to age 75 dies of lung cancer at 50, the disease caused 25 lost life-years. If 100 women with life expectancies of 75 die at age 50, 2,500 (100×25 years) life-years would be lost. But death is not the only outcome of concern. Many adults suffer from diseases that leave them more or less disabled for long periods. Although still alive, their quality of life is diminished. QALYs account for the quality-of-life consequences of illnesses. For example, a disease that reduces quality by one-half reduces QALY by 0.5 during each year the patient suffers. If the disease affects 2 people, it will reduce QALY by 1 (2×0.5) each year. A pharmaceutical treatment that improves life by 0.2 QALYs for 5 people will result in the equivalent of 1 QALY if the benefit is maintained over a 1-year period. The basic assumption is that 2 years scored as 0.5 each add to the equivalent of 1 year of complete wellness. Similarly, 4 years scored as 0.25 each are equivalent to 1 year of complete wellness. A treatment that boosts a patient’s health from 0.50 to 0.75 on a scale ranging from 0.0 (for death) to 1.0 (for the highest level of wellness) adds the equivalent of 0.25 QALY. If the treatment is applied to 4 patients, and the duration of its effect is 1 year, the effect of the treatment would be equivalent to 1 year of complete wellness.

This approach has the advantage of considering benefits and side-effects of treatment programs in a common term. Although QALYs typically are used to assess effects on patients, they also can be used as a measure of effect on others, including caregivers who are placed at risk because their experience is stressful. Most important, QALYs are required for many methods of cost-effectiveness analysis.

The most controversial aspect of the methodology is the method for assigning values along the scale. Three methods are commonly used: standard reference gamble, time-tradeoff, and rating scales. Economists and psychologists differ on their preferred approach to preference assessment. Economists typically prefer the standard gamble because it is consistent with the axioms of choice outlined in decision theory ( Torrance, 1976 ). Economists also accept time-tradeoff because it represents choice even though it is not exactly consistent with the axioms derived from theory ( Bennett and Torrance, 1996 ). However, evidence from experimental studies questions many of the assumptions that underlie economic models of choice. In particular, human evaluators do poorly at integrating complex probability information when making decisions involving risk ( Tversky and Fox, 1995 ). Economic models often assume that choice is rational. However, psychological experiments suggest that methods commonly used for choice studies do not represent the true underlying preference continuum ( Zhu and Anderson, 1991 ). Some evidence supports the use of simple rating scales ( Anderson and Zalinski, 1990 ). Recently, research by economists has attempted to integrate studies from cognitive science, while psychologists have begun investigations of choice and decision-making ( Tversky and Shafir, 1992 ).

A significant body of studies demonstrates that different methods for estimating preferences will produce different values ( Lenert and Kaplan, 2000 ). This happens because the methods ask different questions. More research is needed to clarify the best method for valuing health states.
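
To make the arithmetic concrete, here is a minimal sketch (in Python, not drawn from the source) that reproduces the worked examples above; the function name and inputs are illustrative only:

    # Minimal sketch of the QALY arithmetic described above (hypothetical code).
    # Quality weights run from 0.0 (death) to 1.0 (complete wellness); a QALY
    # total is the sum of (quality weight x years lived at that weight).

    def total_qalys(states):
        """states: iterable of (quality_weight, years) pairs."""
        return sum(weight * years for weight, years in states)

    # Two years at weight 0.5, or four years at 0.25, equal one well-year:
    assert total_qalys([(0.5, 2)]) == 1.0
    assert total_qalys([(0.25, 4)]) == 1.0

    # A treatment that raises 4 patients from 0.50 to 0.75 for one year adds
    # 0.25 QALY per patient -- the equivalent of one year of complete wellness:
    print(4 * (0.75 - 0.50) * 1)  # 1.0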

The weighting used for quality adjustment comes from surveys of patient or population groups, an aspect of the method that has generated considerable discussion among methodologists and ethicists ( Kaplan, 1994 ). Preference weights are typically obtained by asking patients or people randomly selected from a community to rate cases that describe people in various states of wellness. The cases usually describe level of functioning and symptoms. Although some studies show small but significant differences in preference ratings between demographic groups ( Kaplan, 1998 ), most studies have shown a high degree of similarity in preferences (see Kaplan, 1994 , for review). A panel convened by the U.S. Department of Health and Human Services reviewed methodologic issues relevant to cost-utility analysis (the formal name for this approach) in health care. The panel concluded that population averages rather than patient group preference weights are more appropriate for policy analysis ( Gold et al., 1996 ).
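
As an illustration of how such weights can be derived, the hypothetical sketch below applies the time-tradeoff logic (a respondent indifferent between t years in a health state and x years in full health implies a weight of x/t) and averages simple rating-scale responses; all numbers are invented for illustration:

    # Hypothetical sketch of deriving preference weights (not from the source).

    def time_tradeoff_weight(equivalent_healthy_years, years_in_state):
        # Indifference between 'years_in_state' spent in the health state and
        # 'equivalent_healthy_years' in full health implies a weight of x / t.
        return equivalent_healthy_years / years_in_state

    # A respondent who would trade 10 years in the state for 6 healthy years:
    print(time_tradeoff_weight(6, 10))  # 0.6

    # Population-average weight from simple rating-scale responses, as in the
    # panel's recommendation of population averages for policy analysis:
    ratings = [0.55, 0.60, 0.65, 0.60]
    print(sum(ratings) / len(ratings))  # ~0.6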

Several authors have argued that resource allocation on the basis of QALYs is unethical (see La Puma and Lawlor, 1990 ). Those who reject the use of QALYs suggest that they cannot be measured. However, the reliability and validity of quality-of-life measures are well documented ( Spilker, 1996 ). Another ethical challenge to QALYs is that they force health care providers to make decisions based on cost-effectiveness rather than on the health of the individual patient.

Another common criticism of QALYs is that they discriminate against the elderly and the disabled. Older people and those with disabilities have lower QALYs, so it is assumed that fewer services will be provided to them. However, QALYs consider the increment in benefit, not the starting point. Programs that prevent the decline of health status or that prevent deterioration in functioning among the disabled do perform well in QALY outcome analysis. It is likely that QALYs will not reveal benefits for heroic care at the very end of life. However, most people prefer not to take treatment that is unlikely to increase life expectancy or improve quality of life ( Schneiderman et al., 1992 ). Ethical issues relevant to the use of cost-effectiveness analysis are considered in detail in the report of the Panel on Cost-Effectiveness in Health and Medicine ( Gold et al., 1996 ).

Evaluating Clinical Interventions

Behavioral interventions have been used to modify behaviors that put people at risk for disease, to manage disease processes, and to help patients cope with their health conditions. Behavioral and psychosocial interventions take many forms. Some provide knowledge or persuasive information; others involve individual, family, group, or community programs to change or support changes in health behaviors (such as in tobacco use, physical activity, or diet); still others involve patient or health care provider education to stimulate behavior change or risk-avoidance. Behavioral and psychosocial interventions are not without consequence for patients and their families, friends, and acquaintances; interventions cost money, take time, and are not always enjoyable. Justification for interventions requires assurance that the changes advocated are valuable. The kinds of evidence required to evaluate the benefits of interventions are discussed below.

Evidence-Based Medicine

Evidence-based medicine uses the best available scientific evidence to inform decisions about what treatments individual patients should receive ( Sackett et al., 1997 ). Not all studies are equally credible. Last (1995) offered a hierarchy of clinical research evidence, shown in Table 7-1 . Level I, the most rigorous, is reserved for randomized clinical trials (RCTs), in which participants are randomly assigned to the experimental condition or to a meaningful comparison condition—the most widely accepted standard for evaluating interventions. Such trials involve either “single blinding” (investigators know which participants are assigned to the treatment and control groups but participants do not) or “double blinding” (neither the investigators nor the participants know the group assignments) ( Friedman et al., 1985 ). Double blinding is difficult in behavioral intervention trials, but there are some good examples of single-blind experiments. Reviews of the literature often grade studies according to levels of evidence. Level I evidence is considered more credible than Level II evidence; Level III evidence is given little weight.

TABLE 7-1. Research Evidence Hierarchy.

There has been concern about the generalizability of RCTs ( Feinstein and Horwitz, 1997 ; Horwitz, 1987a , b ; Horwitz and Daniels, 1996 ; Horwitz et al., 1996 , 1990 ; Rabeneck et al., 1992 ), specifically because the recruitment of participants can result in samples that are not representative of the population ( Seligman, 1996 ). There is a trend toward increased heterogeneity of the patient population in RCTs. Even so, RCTs often include stringent criteria for participation that can exclude participants on the basis of comorbid conditions or other characteristics that occur frequently in the population. Furthermore, RCTs are often conducted in specialized settings, such as university-based teaching hospitals, that do not draw representative population samples. Trials sometimes exhibit large dropout rates, which further undermine the generalizability of their findings.

Oldenburg and colleagues (1999) reviewed all papers published in 1994 in 12 selected journals on public health, preventive medicine, health behavior, and health promotion and education. They graded the studies according to evidence level: 2% were Level I RCTs and 48% were Level II. The authors expressed concern that behavioral research might not be credible when evaluated against systematic experimental trials, which are more common in other fields of medicine. Studies with more rigorous experimental designs are less likely to demonstrate treatment effectiveness ( Heaney and Goetzel, 1997 ; Mosteller and Colditz, 1996 ). Although there have been relatively few behavioral intervention trials, those that have been published have supported the efficacy of behavioral interventions in a variety of circumstances, including smoking, chronic pain, cancer care, and bulimia nervosa ( Compas et al., 1998 ).

Efficacy and Effectiveness

Efficacy is the capacity of an intervention to work under controlled conditions. Randomized clinical trials are essential in establishing the effects of a clinical intervention ( Chambless and Hollon, 1998 ) and in determining that an intervention can work. However, demonstration of efficacy in an RCT does not guarantee that the treatment will be effective in actual practice settings. For example, some reviews suggest that behavioral interventions in psychotherapy are generally beneficial ( Matt and Navarro, 1997 ), others suggest that interventions are less effective in clinical settings than in the laboratory ( Weisz et al., 1992 ), and others find particular interventions equally effective in experimental and clinical settings ( Shadish et al., 1997 ).

The Division of Clinical Psychology of the American Psychological Association recently established criteria for “empirically supported” psychological treatments ( Chambless and Hollon, 1998 ). In an effort to establish a level of excellence in validating the efficacy of psychological interventions, the criteria are relatively stringent. A treatment is considered empirically supported if it is found to be more effective than either an alternative form of treatment or a credible control condition in at least two RCTs. The effects must be replicated by at least two independent laboratories or investigative teams to ensure that the effects are not attributable to special characteristics of a specific investigator or setting. Several health-related behavior change interventions meeting those criteria have been identified, including interventions for management of chronic pain, smoking cessation, adaptation to cancer, and treatment of eating disorders ( Compas et al., 1998 ).

An intervention that has failed to meet the criteria still has potential value and might represent important or even landmark progress in the field of health-related behavior change. As in many fields of health care, there historically has been little effort to set standards for psychological treatments for health-related problems or disease. Recently, however, managed-care and health maintenance organizations have begun to monitor and regulate both the type and the duration of psychological treatments that are reimbursed. A common set of criteria for making coverage decisions has not been articulated, so decisions are made in the absence of appropriate scientific data to support them. It is in the best interest of the public and those involved in the development and delivery of health-related behavior change interventions to establish criteria that are based on the best available scientific evidence. Criteria for empirically supported treatments are an important part of that effort.

Evaluating Community-Level Interventions

Evaluating the effectiveness of interventions in communities requires different methods. Developing and testing interventions that take a more comprehensive, ecologic approach, and that are effective in reducing risk-related behaviors and influencing the social factors associated with health status, require many levels and types of research ( Flay, 1986 ; Green et al., 1995 ; Greenwald and Cullen, 1984 ). Questions have been raised about the appropriateness of RCTs for addressing research questions when the unit of analysis is larger than the individual, such as a group, organization, or community ( McKinlay, 1993 ; Susser, 1995 ). While this discussion uses the community as the unit of analysis, similar principles apply to interventions aimed at groups, families, or organizations.

Review criteria of community interventions have been suggested by Hancock and colleagues ( Hancock et al., 1997 ). Their criteria for rigorous scientific evaluation of community intervention trials include four domains: (1) design, including the randomization of communities to condition, and the use of sampling methods that assure representativeness of the entire population; (2) measures, including the use of outcome measures with demonstrated validity and reliability and process measures that describe the extent to which the intervention was delivered to the target audience; (3) analysis, including consideration of both individual variation within each community and community-level variation within each treatment condition; and (4) specification of the intervention in enough detail to allow replication.

Randomization of communities to various conditions raises challenges for intervention research in terms of expense and statistical power ( Koepsell et al., 1995 ; Murray, 1995 ). The restricted hypotheses that RCTs test cannot adequately consider the complexities and multiple causes of human behavior and health status embedded within communities ( Israel et al., 1995 ; Klitzner, 1993 ; McKinlay, 1993 ; Susser, 1995 ). A randomized controlled trial might actually alter the interaction between an intervention and a community and result in an attenuation of the effectiveness of the intervention ( Fisher, 1995 ; McKinlay, 1993 ). At the level of community interventions, experimental control might not be possible, especially when change is unplanned. That is, given the different sociopolitical structures, cultures, and histories of communities and the numerous factors that are beyond a researcher's ability to control, it might be impossible to identify and maintain a commensurate comparison community ( Green et al., 1996 ; Hollister and Hill, 1995 ; Israel et al., 1995 ; Klitzner, 1993 ; Mittelmark et al., 1993 ; Susser, 1995 ). Using a control community does not completely solve the problem of comparison, however, because one “cannot assume that a control community will remain static or free of influence by national campaigns or events occurring in the experimental communities” ( Green et al., 1996 , p. 274).

Clear specification of the conceptual model guiding a community intervention is needed to clarify how an intervention is expected to work ( Koepsell, 1998 ; Koepsell et al., 1992 ). This is the contribution of the Theory of Change model for communities described in Chapter 6 . A theoretical framework is necessary to specify mediating mechanisms and modifying conditions. Mediating mechanisms are pathways, such as social support, by which the intervention induces the outcomes; modifying conditions, such as social class, are not affected by the intervention but can influence outcomes independently. Such an approach offers numerous advantages, including the ability to identify pertinent variables and how, when, and in whom they should be measured; the ability to evaluate and control for sources of extraneous variance; and the ability to develop a cumulative knowledge base about how and when programs work ( Bickman, 1987 ; Donaldson et al., 1994 ; Lipsey, 1993 ; Lipsey and Pollard, 1989 ). When an intervention is unsuccessful at stimulating change, data on mediating mechanisms can allow investigators to determine whether the failure is due to the inability of the program to activate the causal processes that the theory predicts or to an invalid program theory ( Donaldson et al., 1994 ).

Small-scale, targeted studies sometimes provide a basis for refining large-scale intervention designs and enhance understanding of methods for influencing group behavior and social change ( Fisher, 1995 ; Susser, 1995 ; Winkleby, 1994 ). For example, more in-depth, comparative, multiple-case-study evaluations are needed to explain and identify lessons learned regarding the context, process, impacts, and outcomes of community-based participatory research ( Israel et al., 1998 ).

Community-Based Participatory Research and Evaluation

As reviewed in Chapter 4 , broad social and societal influences have an impact on health. This concept points to the importance of an approach that recognizes individuals as embedded within social, political, and economic systems that shape their behaviors and constrain their access to resources necessary to maintain their health ( Brown, 1991 ; Gottlieb and McLeroy, 1994 ; Krieger, 1994 ; Krieger et al., 1993 ; Lalonde, 1974 ; Lantz et al., 1998 ; McKinlay, 1993 ; Sorensen et al., 1998a , b ; Stokols, 1992 , 1996 ; Susser and Susser, 1996a , b ; Williams and Collins, 1995 ; World Health Organization [WHO], 1986 ). It also points to the importance of expanding the evaluation of interventions to incorporate such factors ( Fisher, 1995 ; Green et al., 1995 ; Hatch et al., 1993 ; Israel et al., 1995 ; James, 1993 ; Pearce, 1996 ; Sorensen et al., 1998a , b ; Steckler et al., 1992 ; Susser, 1995 ).

This is exemplified by community-based participatory programs, which are collaborative efforts among community members, organization representatives, a wide range of researchers and program evaluators, and others ( Israel et al., 1998 ). The partners contribute “unique strengths and shared responsibilities” ( Green et al., 1995 , p. 12) to enhance understanding of a given phenomenon, and they integrate the knowledge gained from interventions to improve the health and well-being of community members ( Dressler, 1993 ; Eng and Blanchard, 1990–1 ; Hatch et al., 1993 ; Israel et al., 1998 ; Schulz et al., 1998a ). It provides “the opportunity…for communities and science to work in tandem to ensure a more balanced set of political, social, economic, and cultural priorities, which satisfy the demands of both scientific research and communities at higher risk” ( Hatch et al., 1993 , p. 31). The advantages and rationale of community-based participatory research are summarized in Table 7-2 ( Israel et al., 1998 ). The term “community-based participatory research” is used here to differentiate it clearly from “community-based research,” which is often used in reference to research that is placed in the community but in which community members are not actively involved.

TABLE 7-2. Rationale for Community-Based Participatory Research.

Table 7-3 presents a set of principles, or characteristics, that capture the important components of community-based participatory research and evaluation ( Israel et al., 1998 ). Each principle constitutes a continuum and represents a goal, for example, equitable participation and shared control over all phases of the research process ( Cornwall, 1996 ; Dockery, 1996 ; Green et al., 1995 ). Although the principles are presented here as distinct items, community-based participatory research integrates them.

TABLE 7-3. Principles of Community-Based Participatory Research and Evaluation.

There are four major foci of evaluation with implications for research design: context, process, impact, and outcome ( Israel, 1994 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ). A comprehensive community-based participatory evaluation would include all types, but financial constraints often make it practical to pursue only one or two. Evaluation design is extensively reviewed in the literature ( Campbell and Stanley, 1963 ; Cook and Reichardt, 1979 ; Dignan, 1989 ; Green, 1977 ; Green and Gordon, 1982 ; Green and Lewis, 1986 ; Guba and Lincoln, 1989 ; House, 1980 ; Israel et al., 1995 ; Patton, 1987 , 1990 ; Rossi and Freeman, 1989 ; Shadish et al., 1991 ; Stone et al., 1994 ; Thomas and Morgan, 1991 ; Windsor et al., 1994 ; Yin, 1993 ).

Context encompasses the events, influences, and changes that occur naturally in the project setting or environment during the intervention that might affect the outcomes ( Israel et al., 1995 ). Context data provide information about how particular settings facilitate or impede program success. Decisions must be made about which of the many factors in the context of an intervention might have the greatest effect on project success.

Evaluation of process assesses the extent, fidelity, and quality of the implementation of interventions ( McGraw et al., 1994 ). It describes the actual activities of the intervention and the extent of participant exposure, provides quality assurance, describes participants, and identifies the internal dynamics of program operations ( Israel et al., 1995 ).

A distinction is often made in the evaluation of interventions between impact and outcome ( Green and Lewis, 1986 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ; Windsor et al., 1994 ). Impact evaluation assesses the effectiveness of the intervention in achieving desired changes in targeted mediators. These include the knowledge, attitudes, beliefs, and behavior of participants. Outcome evaluation examines the effects of the intervention on health status, morbidity, and mortality. Impact evaluation focuses on what the intervention is specifically trying to change, and it precedes an outcome evaluation. It is proposed that if the intervention can effect change in some intermediate outcome (“impact”), the “final” outcome will follow.

Although the association between impact and outcome may not always be substantiated (as discussed earlier in this chapter), impact may be a necessary measure. In some instances, the outcome goals are too far in the future to be evaluated. For example, childhood cardiovascular risk factor intervention studies typically measure intermediate gains in knowledge ( Parcel et al., 1989 ) and changes in diet or physical activity ( Simons-Morton et al., 1991 ). They sometimes assess cholesterol and blood pressure, but they do not usually measure heart disease because that would not be expected to occur for many years.

Given the aims and the dynamic context within which community-based participatory research and evaluation are conducted, methodologic flexibility is essential. Methods must be tailored to the purpose of the research and evaluation and to the context and interests of the community ( Beery and Nelson, 1998 ; deKoning and Martin, 1996 ; Dockery, 1996 ; Dressler, 1993 ; Green et al., 1995 ; Hall, 1992 ; Hatch et al., 1993 ; Israel et al., 1998 ; Marin and Marin, 1991 ; Nyden and Wiewel, 1992 ; Schulz et al., 1998b ; Singer, 1993 ; Stringer, 1996 ). Numerous researchers have suggested greater use of qualitative data, from in-depth interviews and observational studies, for evaluating the context, process, impact, and outcome of community-based participatory research interventions (Fortmann et al., 1995; Goodman, 1999 ; Hugentobler et al., 1992 ; Israel et al., 1995 , 1998 ; Koepsell et al., 1992 ; Mittelmark et al., 1993 ; Parker et al., 1998 ; Sorensen et al., 1998a ; Susser, 1995 ). Triangulation is the use of multiple methods and sources of data to overcome limitations inherent in each method and to improve the accuracy of the information collected, thereby increasing the validity and credibility of the results ( Denzin, 1970 ; Israel et al., 1995 ; Reichardt and Cook, 1980 ; Steckler et al., 1992 ). For examples of the integration of qualitative and quantitative methods in research and evaluation of public-health interventions, see Steckler et al. (1992) and Parker et al. (1998) .

Assessing Government Interventions

Despite the importance of legislation and regulation to promote public health, the effectiveness of government interventions is poorly understood. In particular, policymakers often cannot answer important empirical questions: do legal interventions work, and at what economic and social cost? Policymakers need to know whether legal interventions achieve their intended goals (e.g., reducing risk behavior). If so, do legal interventions unintentionally increase other risks (risk/risk tradeoff)? Finally, what are the adverse effects of regulation on personal or economic liberties and general prosperity in society? This is an important question not only because freedom has an intrinsic value in democracy, but also because activities that dampen economic development can have health effects. For example, research demonstrates the positive correlation between socioeconomic status and health ( Chapter 4 ).

Legal interventions often are not subjected to rigorous research evaluation. The research that has been done, moreover, has faced challenges in methodology. There are so many variables that can affect behavior and health status (e.g., differences in informational, physical, social, and cultural environments) that it can be extraordinarily difficult to demonstrate a causal relationship between an intervention and a perceived health effect. Consider the methodologic constraints in identifying the effects of specific drunk-driving laws. Several kinds of laws can be enacted within a short period, so it is difficult to isolate the effect of each law. Publicity about the problem and the legal response can cross state borders, making state comparisons more difficult. Because people who drive under the influence of alcohol also could engage in other risky driving behaviors (e.g., speeding, failing to wear safety belts, running red lights), researchers need to control for changes in other highway safety laws and traffic law enforcement. Subtle differences between comparison communities can have unanticipated effects on the impact of legal interventions ( DeJong and Hingson, 1998 ; Hingson, 1996 ).

Despite such methodologic challenges, social science researchers have studied legal interventions, often with encouraging results. The social science, medical, and behavioral literature contains evaluations of interventions in several public health areas, particularly in relation to injury prevention ( IOM, 1999 ; Rivara et al., 1997a , b ). For example, studies have evaluated the effectiveness of regulations to prevent head injuries (bicycle helmets: Dannenberg et al., 1993 ; Kraus et al., 1994 ; Lund et al., 1991 ; Ni et al., 1997 ; Thompson et al., 1996a , b ), choking and suffocation (refrigerator disposal and warning labels on thin plastic bags: Kraus, 1985 ), child poisoning (childproof packaging: Rogers, 1996 ), and burns (tap water: Erdmann et al., 1991 ). One regulatory measure that has received a great deal of research attention relates to reductions in cigarette-smoking ( Chapter 6 ).

Legal interventions can be an important part of strategies to change behaviors. In considering them, government and other public health agencies face difficult and complex tradeoffs between population health and individual rights (e.g., autonomy, privacy, liberty, property). One example is the controversy over laws that require motorcyclists to wear helmets. Ethical concerns accompany the use of legal interventions to mandate behavior change and must be part of the deliberation process.

  • COST-EFFECTIVENESS EVALUATION

It is not enough to demonstrate that a treatment benefits some patients or community members. The demand for health programs exceeds the resources available to pay for them, so treatments must provide not only clinical benefit but also value for money. Investigators, clinicians, and program planners must demonstrate that their interventions constitute a good use of resources.

Well over $1 trillion is spent on health care each year in the United States. Current estimates suggest that expenditures on health care exceed $4,000 per person ( Health Care Financing Administration, 1998 ). Investments are made in health care to produce good health status for the population, and it is usually assumed that more investment will lead to greater health. Some expenditures in health care produce relatively little benefit; others produce substantial benefits. Cost-effectiveness analysis (CEA) can help guide the use of resources to achieve the greatest improvement in health status for a given expenditure.

Consider the medical interventions in Table 7-4 , all of which are well-known, generally accepted, and widely used. Some are traditional medical care and some are preventive programs. To emphasize the focus on increasing good health, the table presents the data in units of health bought for $1 million rather than in dollars per unit of health, the usual approach in CEA. The life-year is the most comprehensive unit measure of health. Table 7-4 reveals several important points about resource allocation. There is tremendous variation among the interventions in what can be accomplished for $1 million: it nets 7,750 life-years if used for influenza vaccinations for the elderly, 217 life-years if applied to smoking-cessation programs, but only 2 life-years if used to supply Lovastatin to men aged 35–44 who have high total cholesterol but no heart disease and no other risk factors for heart disease.

TABLE 7-4. Life-Years Yielded by Selected Interventions per $1 Million, 1997 Dollars.

How effectively an intervention contributes to good health depends not only on the intervention, but also on the details of its use. Antihypertensive medication is effective, but Propranolol is more cost-effective than Captopril. Thyroid screening is more cost-effective in women than in men. Lovastatin produces more good health when targeted at older high-risk men than at younger low-risk men. Screening for cervical cancer at 3-year intervals with the Pap smear yields 36 life-years per $1 million (compared with no screening), but each $1 million spent to increase the frequency of screening to 2 years brings only 1 additional life-year.

The numbers in Table 7-4 illustrate a central concept in resource allocation: opportunity cost. The true cost of choosing to use a particular intervention or to use it in a particular way is not the monetary cost per se, but the health benefits that could have been achieved if the money had been spent on another service instead. Thus, the opportunity cost of providing annual Pap smears ($1 million) rather than smoking-cessation programs is the 217 life-years that could have been achieved through smoking cessation.
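
The calculation is simple enough to state directly; this hypothetical sketch uses the life-years-per-$1-million figures discussed above (the dictionary and labels are illustrative):

    # Opportunity cost per the definition above: the health benefit that the
    # same $1 million would have bought from the forgone alternative.

    life_years_per_million = {
        "influenza vaccination (elderly)": 7750,
        "smoking-cessation programs": 217,
        "Lovastatin (low-risk men aged 35-44)": 2,
    }

    funded = "Lovastatin (low-risk men aged 35-44)"
    alternative = "smoking-cessation programs"

    opportunity_cost = life_years_per_million[alternative]
    print(f"$1 million spent on {funded} carries an opportunity cost of "
          f"{opportunity_cost} life-years (the benefit of {alternative}).")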

The term cost-effectiveness is commonly used but widely misunderstood. Some people confuse cost-effectiveness with cost minimization. Cost minimization aims to reduce health care costs regardless of health outcomes. CEA does not have cost-reduction per se as a goal but is designed to obtain the most improvement in health for a given expenditure. CEA also is often confused with cost/benefit analysis (CBA), which compares investments with returns. CBA ranks the amount of improved health associated with different expenditures with the aim of identifying the appropriate level of investment. CEA indicates which intervention is preferable given a specific expenditure.

Usually, costs are represented as the net difference between the total costs of the intervention and the total costs of the alternative to that intervention. The measure of health is typically the quality-adjusted life-year (QALY). The net health effect of the intervention is the difference between the QALYs produced by the intervention and the QALYs produced by an alternative or other comparison base.
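
The ratio of net cost to net health effect is what the analysis reports. A minimal sketch of that calculation follows; all of the dollar amounts and QALY values are hypothetical, chosen only to show the structure.

```python
# Hypothetical numbers; the structure of the calculation is the point.

def cost_effectiveness_ratio(cost_intervention: float, cost_alternative: float,
                             qalys_intervention: float, qalys_alternative: float) -> float:
    """Net cost per net QALY gained, relative to the comparison base."""
    net_cost = cost_intervention - cost_alternative
    net_qalys = qalys_intervention - qalys_alternative
    return net_cost / net_qalys

# e.g., a program costing $12,000 vs. usual care at $4,000 that produces
# 6.2 vs. 6.0 QALYs per patient: (12000 - 4000) / (6.2 - 6.0)
ratio = cost_effectiveness_ratio(12_000, 4_000, 6.2, 6.0)
print(f"${ratio:,.0f} per QALY gained")   # -> $40,000 per QALY gained
```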

Comprehensive as it is, CEA does not include everything that might be relevant to a particular decision—so it should never be used mechanically. Decision-makers can have legitimate reasons to emphasize particular groups, benefits, or costs more heavily than others. Furthermore, some decisions require information that cannot be captured easily in a CEA, such as the effect of an intervention on individual privacy or liberty.

CEA is an analytical framework that arises from the question of which ways of promoting good health—procedures, tests, medications, educational programs, regulations, taxes or subsidies, and combinations and variations of these—provide the most effective use of resources. Specific recommendations about behavioral and psychosocial interventions will contribute the most to good health if they are set in this larger context and based on information that demonstrates that they are in the public interest. However, comparing behavioral and psychosocial interventions with other ways of promoting health on the basis of cost-effectiveness requires additional research. Currently there are too few studies that meet this standard to support such recommendations.

  • DISSEMINATION

A basic assumption underlying intervention research is that tested interventions found to be effective are disseminated to and implemented in clinics, communities, schools, and worksites. However, there is a sizable gap between science and practice ( Anderson, 1998 ; Price, 1989 , 1998 ). Researchers and practitioners need to ensure that an intervention is effective, and that the community or organization is prepared to adopt, implement, disseminate, and institutionalize it. There also is a need for demonstration research (phase V) to explain more about the process of dissemination itself.

Dissemination to Consumers

Biomedical research results are commonly reported in the mass media. Nearly every day people are given information about the risks of disease, the benefits of treatment, and the potential health hazards in their environments. They regularly make health decisions on the basis of their understanding of such information. Some evidence shows that lay people often misinterpret health risk information ( Berger and Hendee, 1989 ; Fischhoff, 1999a ), as do their doctors ( Kalet et al., 1994 ; Kong et al., 1986 ). On such a widely publicized issue as mammography, for example, evidence suggests that women overestimate their risk of getting breast cancer by a factor of at least 20 and that they overestimate the benefits of mammography by a factor of 100 ( Black et al., 1995 ). In a study of 500 female veterans ( Schwartz et al., 1997 ), half the women overestimated their risk of death from breast cancer by a factor of 8. This did not appear to be because the subjects thought that they were more at risk than other women; only 10% reported that they were at higher risk than the average woman of their age. The topic of communication of health messages to the public is discussed at length in an IOM report, Speaking of Health: Assessing Health Communication Strategies for Diverse Populations ( IOM, 2001 ).

Communicating Risk Information

Improving communication requires understanding what information the public needs. That necessitates both descriptive and normative analyses, which consider what the public believes and what the public should know, respectively. Juxtaposing normative and descriptive analyses might provide guidance for reducing misunderstanding ( Fischhoff and Downs, 1997 ). Formal normative analysis of decisions involves the creation of decision trees, showing the available options and the probabilities of various outcomes of each, whose relative attractiveness (or aversiveness) must be evaluated by people. Although full analyses of decision problems can be quite complex, they often reveal ways to drastically simplify individuals' decision-making problems—in the sense that they reveal a small number of issues of fact or value that really merit serious attention ( Clemen, 1991 ; Merz et al., 1993 ; Raiffa, 1968 ). Those few issues can still pose significant challenges for decision makers. The actual probabilities can differ from people's subjective probabilities (which govern their behavior). For example, a woman who overestimates the value of a mammogram might insist on tests that are of little benefit to her and mistrust the political/ medical system that seeks to deny such care ( Woloshin et al., 2000 ). Obtaining estimates of subjective probabilities is difficult. Although eliciting probabilities has been studied in other contexts over the past two generations ( von Winterfeldt and Edwards, 1986 ; Yates, 1990 ), it has received much less attention in medical contexts, where it can pose questions that people are unwilling or unable to confront ( Fischhoff and Bruine de Bruin, 1999 ).
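
To show in miniature what such a decision tree looks like, the sketch below scores two options by expected utility. The options, probabilities, and utilities are all invented for illustration; a real analysis would elicit them from evidence and from the decision maker.

```python
# Toy decision tree; every option, probability, and utility is invented.

# Each option maps to a list of (probability, utility) branches.
options = {
    "screen": [
        (0.010, 0.40),   # disease present, found early and treated
        (0.001, 0.10),   # disease present, missed despite screening
        (0.989, 1.00),   # no disease; burden of testing ignored here
    ],
    "do not screen": [
        (0.011, 0.15),   # disease present, found late
        (0.989, 1.00),   # no disease
    ],
}

def expected_utility(branches):
    """Probability-weighted utility across a list of (p, u) branches."""
    return sum(p * u for p, u in branches)

for name, branches in options.items():
    print(f"{name}: {expected_utility(branches):.4f}")
# The comparison turns on just a few probabilities and values, which is
# exactly what the formal analysis surfaces for serious attention.
```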

In addition to such quantitative beliefs, people often need a qualitative understanding of the processes by which risks are created and controlled. This allows them to get an intuitive feeling for the quantitative estimates, to feel competent to make decisions in their own behalf, to monitor their own experience, and to know when they need help ( Fischhoff, 1999b ; Leventhal and Cameron, 1987 ). Not seeing the world in the same way as scientists do also can lead lay people to misinterpret communications directed at them. One common (and some might argue, essential) strategy for evaluating any public health communication or research instrument is to ask people to think aloud as they answer draft versions of questions ( Ericsson and Simon, 1994 ; Schriver, 1989 ). For example, subjects might be asked about the probability of getting HIV from unprotected sexual activity. Reasons for their assessments might be explored as they elaborate on their impressions and the assumptions they use ( Fischhoff, 1999b ; McIntyre and West, 1992 ). The result should both reveal their intuitive theories and improve the communication process.

When people must evaluate their options, the way in which information is framed can have a substantial effect on how it is used ( Kahneman and Tversky, 1983 ; Schwartz, 1999 ; Tversky and Kahneman, 1988 ). The fairest presentation of risk information might be one in which multiple perspectives are used ( Kahneman and Tversky, 1983 , 1996 ). For example, one common situation involves small risks that add up over the course of time, through repeated exposures. The chances of being injured in an automobile crash are very small for any one outing, whether or not the driver wears a seatbelt. However, driving over a lifetime creates a substantial risk—and a substantial benefit for seatbelt use. One way to communicate that perspective is to do the arithmetic explicitly, so that subjects understand it ( Linville et al., 1993 ). Another method that helps people to understand complex information involves presenting ranges rather than best estimates. Science is uncertain, and it should be helpful for people to understand the intervals within which their risks are likely to fall ( Lipkus and Hollands, 1999 ).
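
One way to do that arithmetic explicitly is sketched below: small per-exposure risks compound into a substantial lifetime risk. The per-trip probability and the number of trips are invented for illustration, not measured values; presenting a range would simply wrap the same calculation around low and high estimates rather than a single best guess.

```python
# Invented per-exposure risk; the compounding logic is the point.

def cumulative_risk(per_exposure_risk: float, n_exposures: int) -> float:
    """Chance of at least one bad outcome over n independent exposures."""
    return 1 - (1 - per_exposure_risk) ** n_exposures

p_trip = 1e-6      # hypothetical injury risk for a single outing
trips = 50_000     # rough number of outings over a driving lifetime

print(f"one trip:   {cumulative_risk(p_trip, 1):.6%}")     # 0.000100%
print(f"a lifetime: {cumulative_risk(p_trip, trips):.2%}") # about 4.88%
```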

Risk communication can be improved. For example, many members of the public have been fearful that proximity to electromagnetic fields and power lines can increase the risk of cancer. Studies revealed that many people knew very little about properties of electricity. In particular, they usually were unaware of how rapidly exposure decreases with distance from the lines. After studying mental models of this risk, Morgan (1995) developed a tiered brochure that presented the problem at several levels of detail. The brochure addressed common misconceptions and explained why scientists disagree about the risks posed by electromagnetic fields. Participants on each side of the debate reviewed the brochure for fairness. Several hundred thousand copies of the brochure have now been distributed. This approach to communication requires that the public listen to experts, but it also requires that the experts listen to the public. Providing information is not enough; it is necessary to take the next step to demonstrate that the information is presented in an unbiased fashion and that the public accurately processes what is offered ( Edworthy and Adams, 1997 ; Hadden, 1986 ; Morgan et al., 2001 ; National Research Council, 1989 ).

The electromagnetic field brochure is an example of a general approach in cognitive psychology, in which communications are designed to create coherent mental models of the domain being considered ( Ericsson and Simon, 1994 ; Fischhoff, 1999b ; Gentner and Stevens, 1983 ; Johnson-Laird, 1980 ). The bases of these communications are formal models of the domain. In the case of the complex processes creating and controlling risks, the appropriate representation is often an influence diagram, a directed graph that captures the uncertain relationships among the factors involved ( Clemen, 1991 ; Morgan et al., 2001 ). Creating such a diagram requires pooling the knowledge of diverse disciplines, rather than letting each tell its own part of the story. Identifying the critical messages requires considering both the science of the risk and recipients' intuitive conceptualizations.

Presentation of Clinical Research Findings

Research results are commonly misinterpreted. When a study shows that the effect of a treatment is statistically significant, it is often assumed that the treatment works for every patient or at least for a high percentage of those treated. In fact, large experimental trials, often with considerable publicity, promote treatments that have only minor effects in most patients. For example, contemporary care for high serum cholesterol has been greatly influenced by results of the Coronary Primary Prevention Trial (CPPT; Lipid Research Clinics Program, 1984 ), in which men were randomly assigned to take a placebo or cholestyramine. Cholestyramine can significantly lower serum cholesterol and, in this trial, reduced it by an average of 8.5%. Men in the treatment group experienced 24% fewer heart attack deaths and 19% fewer heart attacks than did men who took the placebo.

The CPPT showed a 24% reduction in cardiovascular mortality in the treated group. However, the absolute proportions of patients who died of cardiovascular disease were similar in the 2 groups: there were 38 deaths among 1900 participants (2%) in the placebo group and 30 deaths among 1906 participants (1.6%) in the cholestyramine group. In other words, taking the medication for 6 years reduced the chance of dying from cardiovascular disease from 2% to 1.6%.

Because of the difficulties in communicating risk ratio information, the use of simple statistics, such as the number needed to treat (NNT), has been suggested ( Sackett et al., 1997 ). NNT is the number of people that must be treated to avoid one bad outcome. Statistically, NNT is defined as the reciprocal of the absolute risk reduction. In the cholesterol example, if 2% (0.020) of the patients died in the control arm of an experiment and 1.6% (0.016) died in the experimental arm, the absolute risk reduction is 0.020 − 0.016 = 0.004. The reciprocal of 0.004 is 250. In this case, 250 people would have to be treated for 6 years to avoid 1 death from coronary heart disease. Treatments can harm as well as benefit, so in addition to calculating the NNT, it is valuable to calculate the number needed to harm (NNH). This is the number of people a clinician would need to treat to produce one adverse event. NNT and NNH can be modified for those in particular risk groups. The advantage of these simple numbers is that they allow much clearer communication of the magnitude of treatment effectiveness.
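
The calculation is short enough to sketch directly. The NNT below uses the CPPT mortality figures quoted above; the NNH line uses invented adverse-event rates, since the text reports none.

```python
# NNT from the CPPT figures quoted above; the NNH rates are invented.

def number_needed_to_treat(risk_control: float, risk_treated: float) -> float:
    """Reciprocal of the absolute risk reduction."""
    return 1 / (risk_control - risk_treated)

nnt = number_needed_to_treat(0.020, 0.016)   # 1 / 0.004
print(f"NNT over 6 years: {nnt:.0f}")        # 250 treated to avert 1 death

# NNH applies the same arithmetic to adverse-event rates (hypothetical here):
nnh = 1 / (0.03 - 0.01)                      # 3% treated vs. 1% control
print(f"NNH (hypothetical): {nnh:.0f}")      # 50 treated per added adverse event
```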

Shared Decision Making

Once patients understand the complex information about outcomes, they can fully participate in the decision-making process. The final step in disseminating information to patients involves an interactive process that allows them to make informed choices about their own health care.

Despite a growing consensus that they should be involved, evidence suggests that patients are rarely consulted. Wennberg (1995) outlined a variety of common medical decisions in which there is uncertainty. In each, treatment selection involves profiles of risks and benefits for patients. Thiazide medications can be effective at controlling blood pressure, but they also can be associated with increased serum cholesterol; the benefit of blood pressure reduction must be balanced against such side effects as dizziness and impotence.

Factors that affect patient decision making and use of health services are not well understood. It is usually assumed that use of medical services is driven primarily by need, that those who are sickest or most disabled use services the most ( Aday, 1998 ). Although illness is clearly the major reason for service use, the literature on small-area variation demonstrates that there can be substantial variability in service use among communities that have comparable illness burdens and comparable insurance coverage ( Wennberg, 1998 ). Therefore, social, cultural, and system variables also contribute to service use.

The role of patients in medical decision making has undergone substantial recent change. In the early 1950s, Parsons (1951) suggested that patients were excluded from medical decision making unless they assumed the “sick role,” in which patients submit to a physician's judgment, and it is assumed that physicians understand the patients' preferences. Through a variety of changes, patients have become more active. More information is now available, and many patients demand a greater role ( Sharf, 1997 ). The Internet offers vast amounts of information to patients, some of it misleading or inaccurate ( Impicciatore et al., 1997 ). One difficulty is that many patients are not sophisticated consumers of technical medical information ( Strum, 1997 ).

Another important issue is whether patients want a role. The literature is contradictory on this point; at least eight studies have addressed the issue. Several suggest that most patients express little interest in participating ( Cassileth et al., 1980 ; Ende et al., 1989 ; Mazur and Hickam, 1997 ; Pendleton and House, 1984 ; Strull et al., 1984 ; Waterworth and Luker, 1990 ). Those studies challenge the basis of shared medical decision making. Is it realistic to engage patients in the process if they are not interested? Deber ( Deber, 1994 ; Deber et al., 1996 ) has drawn an important distinction between problem solving and decision making. Medical problem solving requires technical skill to make an appropriate diagnosis and select treatment. Most patients prefer to leave those judgments in the hands of experts ( Ende et al., 1989 ). Studies challenging the notion that patients want to make decisions typically asked questions about problem solving ( Ende et al., 1989 ; Pendleton and House, 1984 ; Strull et al., 1984 ).

Shared decision making requires patients to express personal preferences for desired outcomes, and many decisions involve very personal choices. Wennberg (1998) offers examples of variation in health care practices that are dominated by physician choice. One is the choice between mastectomy and lumpectomy for women with well-defined breast cancer. Systematic clinical trials have shown that the probability of surviving breast cancer is about equal after mastectomy and after lumpectomy followed by radiation ( Lichter et al., 1992 ). But in some areas of the United States, nearly half of women with breast cancer have mastectomies (for example, Provo, Utah); in other areas less than 2% do (for example, New Jersey; Wennberg, 1998 ). Such differences are determined largely by surgeon choice; patient preference is not considered. In the breast cancer example, interviews suggest that some women have a high preference for maintaining the breast, and others feel more comfortable having more breast tissue removed. The choices are highly personal and reflect variations in comfort with the idea of life with and without a breast. Patients might not want to engage in technical medical problem solving, but they are the only source of information about preferences for potential outcomes.

The process by which patients exercise choice can be difficult. There have been several evaluations of efforts to involve patients in decision making. Greenfield and colleagues (1985) taught patients how to read their own medical records and offered coaching on what questions to ask during encounters with physicians. In this randomized trial involving patients with peptic ulcer disease, those assigned to a 20-minute treatment had fewer functional limitations and were more satisfied with their care than were patients in the control group. A similar experiment involving patients treated for diabetes showed that patients randomly assigned to receive visit preparation scored significantly better than controls on three dimensions of health-related quality of life (mobility, role performance, physical activity). Furthermore, there were significant improvements for biochemical measures of diabetes control ( Greenfield et al., 1988 ).

Many medical decisions are more complex than those studied by Greenfield and colleagues. There are usually several treatment alternatives, and the outcomes for each choice are uncertain. Also, the importance of the outcomes might be valued differently by different people. Shared decision-making programs have been proposed to address those concerns ( Kasper et al., 1992 ). The programs usually use electronic media. Some involve interactive technologies in which a patient becomes familiar with the probabilities of various outcomes; video allows the patient to witness the outcomes experienced by others who have made each treatment choice. A variety of interactive programs have been systematically evaluated. In one study ( Barry et al., 1995 ), patients with benign prostatic hyperplasia were given the opportunity to use an interactive video. The video was generally well received, and the authors reported a significant reduction in the rate of surgery and an increase in the proportion who chose “watchful waiting” after using the decision aid. Flood et al. (1996) reported similar results with an interactive program.

Not all evaluations of decision aids have been positive. In one evaluation of an impartial video for patients with ischemic heart disease ( Liao et al., 1996 ), 44% of the patients found it helpful for making treatment choices, but more than 40% reported that it increased their anxiety. Most of the patients had received advice from their physicians before watching the video.

Despite enthusiasm for shared medical decision making, little systematic research has evaluated interventions to promote it ( Frosch and Kaplan, 1999 ). Systematic experimental trials are needed to determine whether the use of shared decision aids enhances patient outcomes. Although decision aids appear to enhance patient satisfaction, it is unclear whether they result in reductions in surgery, as suggested by Wennberg (1998) , or in improved patient outcomes ( Frosch and Kaplan, 1999 ).

Dissemination Through Organizations

The effect of any preventive intervention depends both on its ability to influence health behavior change or reduce health risks and on the extent to which the target population has access to and participates in the program. Few preventive interventions are free-standing in the community. Rather, organizations serve as “hosts” for health promotion and disease prevention programs. Once a program has proven successful in demonstration projects and efficacy trials, it must be adopted and implemented by new organizations. Unfortunately, diffusion to new organizations often proceeds very slowly ( Murray, 1986 ; Parcel et al., 1990 ).

A staged change process has been proposed for optimal diffusion of preventive interventions to new organizations. Although different researchers have offered a variety of approaches, there is consensus on the importance of at least four stages ( Goodman et al., 1997 ):

  • dissemination, during which organizations are made aware of the programs and their benefits;
  • adoption, during which the organization commits to initiating the program;
  • implementation, during which the organization offers the program or services;
  • maintenance or institutionalization, during which the organization makes the program part of its routines and standard offerings.

Research investigating the diffusion of health behavior change programs to new organizations can be seen, for example, in adoption of prevention curricula by schools and of preventive services by medical care practices.

Schools are important because they allow consistent contact with children over their developmental trajectory and they provide a place where acquisition of new information and skills is normative ( Orlandi, 1996b ). Although much emphasis has been placed on developing effective health behavior change curricula for students throughout their school years, the literature is replete with evaluations of school-based curricula suggesting that such programs have been less than successful ( Bush et al., 1989 ; Parcel et al., 1990 ; Rohrbach et al., 1996 ; Walter, 1989 ). Challenges or barriers to effective diffusion of the programs include ( Rohrbach et al., 1996 ; Smith et al., 1995 ):

  • organizational issues, such as limited time and resources, few incentives for the organization to give priority to health issues, pressure to focus on academic curricula to improve student performance on proficiency tests, and unclear role delineation in terms of responsibility for the program;
  • extra-organizational issues or “environmental turbulence,” such as restructuring of schools, changing school schedules or enrollments, and uncertainties in public funding;
  • characteristics of the programs that make them incompatible with the potential host organizations, such as being too long, costly, and complex.

Initial or traditional efforts to enhance diffusion focused on the characteristics of the intervention program, but more recent studies have focused on the change process itself. Two NCI-funded studies to diffuse tobacco prevention programs throughout schools in North Carolina and Texas targeted the four stages of change and were evaluated through randomized, controlled trials ( Goodman et al., 1997 ; Parcel et al., 1989 , 1995 ; Smith et al., 1995 ; Steckler et al., 1992 ). Teacher-training interventions appeared to enhance the likelihood of implementation in each study (an effect that has been replicated in other investigations; see Perry et al., 1990 ). However, other strategies (e.g., process consultation, newsletters, self-paced instructional video) were less successful at enhancing adoption and institutionalization. None of the strategies attempted to change the organizing arrangements (such as reward systems or role responsibilities) of the school districts to support continued implementation of the program.

These results suggest that further reliance on organizational change theory might help programs diffuse more rapidly and thoroughly. For example, Rohrbach et al. (1996 , pp. 927–928) suggest that “change agents and school personnel should work as a team to diagnose any problems that may impede program implementation and develop action plans to address them [and that]…change agents need to promote the involvement of teachers, as well as that of key administrators, in decisions about program adoption and implementation.” These suggestions are clearly consistent with an organizational development approach. Goodman and colleagues (1997) suggest that the North Carolina intervention might have been more effective had it included more participative problem diagnosis and action planning, and had consultation been less directive and more oriented toward increasing the fit between the host organization and the program.

Medical Practices

Primary care medical practices have long been regarded as organizational settings that provide opportunities for health behavior interventions. With the growth of managed care and its financial incentives for prevention, these opportunities are even greater ( Gordon et al., 1996 ). Much effort has been invested in the development of effective programs and processes for clinical practices to accomplish health behavior change. However, the diffusion of such programs to medical practices has been slow (e.g., Anderson and May, 1995 ; Lewis, 1988 ).

Most systemic programs encourage physicians, nurses, health educators, and other members of the health-professional team to provide more consistent change-related statements and behavioral support for health-enhancing behaviors in patients ( Chapter 5 ). There might be fundamental aspects of a medical practice that support or inhibit efforts to improve health-related patient behavior ( Walsh and McPhee, 1992 ). Visual reminders to stay up-to-date on immunizations, to stop smoking cigarettes, to use bicycle helmets, and to eat a healthy diet are examples of systemic support for patient activation and self-care ( Lando et al., 1995 ). Internet support for improved self-management of diabetes has shown promise ( McKay et al., 1998 ). Automated chart reminders to ask about smoking status, update immunizations, and ensure timely cancer-screening examinations—such as Pap smears, mammography, and prostate screening—are systematic practice-based improvements that increase the rate of success in reaching stated goals on health process and health behavior measures ( Cummings et al., 1997 ). Prescription forms for specific telephone callback support can enhance access to telephone-based counseling for weight loss, smoking cessation, and exercise and can make such behavioral teaching and counseling more accessible ( Pronk and O'Connor, 1997 ). Those and other structural characteristics of clinical practices are being used and evaluated as systematic practice-based changes that can improve treatment for, and prevention of, various chronic illnesses ( O'Connor et al., 1998 ).

Barriers to diffusion include physician factors, such as lack of training, lack of time, and lack of confidence in one's prevention skills; health-care system factors, such as lack of health-care coverage and inadequate reimbursement for preventive services in fee-for-service systems; and office organization factors, such as inflexible office routines, lack of reminder systems, and unclear assignment of role responsibilities ( Thompson et al., 1995 ; Wagner et al., 1996 ).

The capitated financing of many managed-care organizations greatly reduces system barriers. Interventions that have focused solely on physician knowledge and behavior have not been very effective; interventions that also addressed office organization factors have been more effective ( Solberg et al., 1998b ; Thompson et al., 1995 ). For example, the Put Prevention Into Practice (PPIP) program ( Griffith et al., 1995 ), a comprehensive federal effort, was recommended by the U.S. Preventive Services Task Force and is distributed by federal agencies and through professional associations. Using a case study approach, McVea and colleagues (1996) studied the implementation of the program in family practice settings. They found that PPIP was “used not at all or only sporadically by the practices that had ordered the kit” (p. 363). The authors suggested that the practices that provided selected preventive services did not adopt PPIP because they lacked the organizational skills and resources to incorporate the prevention systems into their office routines without external assistance.

Descriptive research clearly indicates a need for well-conceived and methodologically rigorous diffusion research. Many of the barriers to more rapid and effective diffusion are clearly “systems problems” ( Solberg et al., 1998b ). Thus, even though the results are somewhat mixed, recent work applying systems approaches and organizational development strategies to the diffusion dilemma is encouraging. In particular, the emphasis on building internal capacity for diffusion of the preventive interventions—for example, continuous quality improvement teams ( Solberg et al., 1998a ) and the identification and training of “program champions” within the adopting systems ( Smith et al., 1995 )—seems crucial for institutionalization of the programs.

Dissemination to Community-Based Groups

This section examines three aspects of dissemination: the need for dissemination of effective community interventions, community readiness for interventions, and the role of dissemination research.

Dissemination of Effective Community Interventions

Dissemination requires the identification of core and adaptive elements of an intervention ( Pentz et al., 1990 ; Pentz and Trebow, 1997 ; Price, 1989 ). Core elements are features of an intervention program or policy that must be replicated to maintain the integrity of the interventions as they are transferred to new settings. They include theoretically based behavior change strategies, targeting of multiple levels of influence, and the involvement of empowered community leaders ( Florin and Wandersman, 1990 ; Pentz, 1998 ). Practitioners need training in specific strategies for the transfer of core elements ( Bero et al., 1998 ; Orlandi, 1986 ). In addition, the amount of intervention delivered and its reach into the targeted population might need to be preserved to replicate behavior change in a new setting. Research has not established a quantitative “dose” of intervention or a quantitative guide for the percentage of core elements that must be implemented to achieve behavior change. Process evaluation can provide guidance regarding the desired intensity of, and fidelity to, the intervention protocol. Botvin and colleagues (1995) , for example, found that at least half the prevention program sessions needed to be delivered to achieve the targeted effects in a youth drug abuse prevention program. They also found that increased prevention effects were associated with fidelity to the intervention protocol, which included standardized training of those implementing the program, implementation within 2 weeks of that training, and delivery of at least two program sessions or activities per week ( Botvin et al., 1995 ).

Adaptive elements are features of an intervention that can be tailored to local community, organizational, social, and economic realities of a new setting without diluting the effectiveness of the intervention ( Price, 1989 ). Adaptations might include timing and scheduling or culturally meaningful themes through which the educational and behavior change strategies are delivered.

Community and Organizational Readiness

Community and organizational factors might facilitate or hinder the adoption, implementation, and maintenance of innovative interventions. Diffusion theory assumes that the unique characteristics of the adopter (such as community, school, or worksite) interact with the specific attributes of the innovation (risk factor targets) to determine whether and when an innovation is adopted and implemented ( Emmons et al., 2000 ; Rogers, 1983 , 1995 ). Rogers (1983 , 1995) has identified characteristics that predict the adoption of innovations in communities and organizations. For example, an innovation that has a relative advantage over the idea or activity that it supersedes is more likely to be adopted. In the case of health promotion, organizations might see smoke-free worksites as having a relative advantage not only for employee health, but also for the reduction of absenteeism. An innovation that is seen as compatible with adopters' sociocultural values and beliefs, with previously introduced ideas, or with adopters' perceived needs for innovation is more likely to be implemented. The less complex and the clearer the innovation, the more likely it is to be adopted. For example, potential adopters are more likely to change their health behaviors when educators provide clear specification of the skills needed to change the behaviors. Trialability is the degree to which an innovation can be experimented with on a limited basis. In nutrition education, adopters are more likely to prepare low-fat recipes at home if they have an opportunity to taste the results in a class or supermarket and are given clear, simple directions for preparing them. Finally, observability is the degree to which the results of an innovation are visible to others. In health behavior change, an example of observability might be attention given to a health promotion program by the popular press ( Pentz, 1998 ; Rogers, 1983 ).

Dissemination Research

The ability to identify effective interventions and explain the characteristics of communities and organizations that support dissemination of those interventions provides the basic building blocks for dissemination. It is necessary, however, to learn more about how dissemination occurs to increase its effectiveness ( Pentz, 1998 ). What are the core elements of interventions, and how can they be adapted ( Price, 1989 )? How do the predictors of diffusion function in the dissemination process ( Pentz, 1998 )? What characteristics of community leaders are associated with dissemination of prevention programs? What personnel and material resources are needed to implement and maintain prevention programs? How can written materials and training in program implementation be provided to preserve fidelity to core elements ( Price, 1989 )?

Dissemination research could help identify alternative ways of conceptualizing the transfer of intervention technology from research to practice settings. Rather than disseminating an exact replication of specific tested interventions, program transfer might be based on core and adaptive intervention components at both the individual and community organizational levels ( Blaine et al., 1997 ; Perry, 1999 ). Dissemination might also be viewed as replicating a community-based participatory research process, or as a planning process that incorporates core components ( Perry, 1999 ), rather than exact duplication of all aspects of intervention activities.

The principles of community-based participatory research presented here could be operationalized and used as criteria for examining the extent to which these dimensions were disseminated to other projects. The guidelines developed by Green and colleagues (1995) for classifying participatory research projects also could be used. Similarly, based on her research and experience with children and adolescents in school health behavior change programs, Perry (1999) developed a guidebook that outlines a 10-step process for developing communitywide health behavior programs for children and adolescents.

Facilitating Interorganizational Linkages

To address complex health issues effectively, organizations increasingly form links with one another, creating either dyadic connections (pairs) or networks ( Alter and Hage, 1992 ). The potential benefits of these interorganizational collaborations include access to new information, ideas, materials, and skills; minimization of duplication of effort and services; shared responsibility for complex or controversial programs; increased power and influence through joint action; and increased options for intervention (e.g., one organization might not experience the political constraints that hamper the activities of another; Butterfoss et al., 1993 ). However, interorganizational linkages have costs. Time and resources must be devoted to the formation and maintenance of relationships. Negotiating the assessment and planning processes can take longer. And sometimes an organization can find that the policies and procedures of other organizations are incompatible with its own ( Alter and Hage, 1992 ; Butterfoss et al., 1993 ).

One way a dyadic linkage between organizations can serve health-promoting goals grows out of the diffusion of innovations through organizations. An organization can serve as a “linking agent” ( Monahan and Scheirer, 1988 ), facilitating the adoption of a health innovation by organizations that are potential implementers. For example, the National Institute for Dental Research (NIDR) developed a school-based program to encourage children to use a fluoride mouth rinse to prevent caries. Rather than marketing the program directly to the schools, NIDR worked with state agencies to promote the program. In a national study, Monahan and Scheirer (1988) found that when state agencies devoted more staff to the program and located a moderate proportion of their staff in regional offices (rather than in a central office), a larger proportion of school districts was likely to implement the program. Other programs, such as the Heart Partners program of the American Heart Association ( Roberts-Gray et al., 1998 ), have used the concept of linking agents to diffuse preventive interventions. Studies of these approaches attempt to identify the organizational policies, procedures, and priorities that permit the linking agent to successfully reach a large proportion of the organizations that might implement the health behavior program. However, the research in this area does not yet allow general conclusions or guidelines to be drawn.

Interorganizational networks are commonly used in community-wide health initiatives. Such networks might be composed of similar organizations that coordinate service delivery (often called consortia) or organizations from different sectors that bring their respective resources and expertise to bear on a complex health problem (often called coalitions). Multihospital systems or linkages among managed-care organizations and local health departments for treating sexually transmitted diseases ( Rutherford, 1998 ) are examples of consortia. The interorganizational networks used in Project ASSIST and COMMIT, major NCI initiatives to reduce the prevalence of smoking, are examples of coalitions ( U.S. Department of Health and Human Services, 1990 ).

Stage theory has been applied to the formation and performance of interorganizational networks ( Alter and Hage, 1992 ; Goodman and Wandersman, 1994 ). Various authors have posited somewhat different stages of development, but they all include initial actions to form the coalition; formalization of the coalition's mission, structure, and processes; planning, development, and implementation of programmatic activities; and accomplishment of the coalition's health goals. Stage theory suggests that different strategies are likely to facilitate success at different stages of development ( Lewin, 1951 ; Schein, 1987 ). The complexity, formalization, staffing patterns, communication and decision-making patterns, and leadership styles of the interorganizational network will affect its ability to progress toward its goals ( Alter and Hage, 1992 ; Butterfoss et al., 1993 ; Kegler et al., 1998a , b ).

In 1993, Butterfoss and colleagues reviewed the literature on community coalitions and found “relatively little empirical evidence” (p. 315) to bring to bear on the assessment of their effectiveness. Although the use of coalitions in community-wide health promotion continues, the accumulation of evidence supporting their effectiveness is still slim. Several case studies suggest that coalitions and consortia can be successful in bringing about changes in health behaviors, health systems, and health status (e.g., Butterfoss et al., 1998 ; Fawcett et al., 1997 ; Kass and Freudenberg, 1997 ; Myers et al., 1994 ; Plough and Olafson, 1994 ). However, the conditions under which coalitions are most likely to thrive and the strategies and processes that are most likely to result in effective functioning of a coalition have not been consistently identified empirically.

Evaluation models, such as the FORECAST model ( Goodman and Wandersman, 1994 ) and the model proposed by the Work Group on Health Promotion and Community Development at the University of Kansas ( Fawcett et al., 1997 ), address the lack of systematic and rigorous evaluation of coalitions. These models provide strategies and tools for assessing coalition functioning at all stages of development, from initial formation to ultimate influence on the coalition's health goals and objectives. They are predicated on the assumption that the successful passage through each stage is necessary, but not sufficient, to ensure successful passage through the next stage. Widespread use of these and other evaluation frameworks and tools can increase the number and quality of the empirical studies of the effects of interorganizational linkages.

Orlandi (1996a) states that diffusion failures often result from a lack of fit between the proposed host organization and the intervention program. Thus, he suggests that if the purpose is to diffuse an existing program, the design of the program and the process of diffusion need to be flexible enough to adapt to the needs and resources of the organization. If the purpose is to develop and disseminate a new program, the innovation development and transfer processes should be integrated. Those conclusions are consistent with some of the studies reviewed above. For example, McVea et al. (1996) concluded that a “one size fits all” approach to clinical preventive systems was not likely to diffuse effectively.

  • Aday LA. Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity. Chicago: Health Administration Press; 1998.
  • Alter C, Hage J. Organisations Working Together. Newbury Park, CA: Sage; 1992.
  • Altman DG. Sustaining interventions in community systems: On the relationship between researchers and communities. Health Psychology. 1995; 14 :526–536. [ PubMed : 8565927 ]
  • Anderson LM, May DS. Has the use of cervical, breast, and colorectal cancer screening increased in the United States? American Journal of Public Health. 1995; 85 :840–842. [ PMC free article : PMC1615482 ] [ PubMed : 7762721 ]
  • Anderson NB. After the discoveries, then what? A new approach to advancing evidence-based prevention practice. Programs and abstracts from NIH Conference, Preventive Intervention Research at the Crossroads; Bethesda, MD. 1998. pp. 74–75.
  • Anderson NH, Zalinski J. Functional measurement approach to self-estimation in multiattribute evaluation. In: Anderson NH, editor. Contributions to Information Integration Theory, Vol. 1: Cognition; Vol. 2: Social; Vol. 3: Developmental. Hillsdale, NJ: Erlbaum Press; 1990. pp. 145–185.
  • Antonovsky A. The life cycle, mental health and the sense of coherence. Israel Journal of Psychiatry and Related Sciences. 1985; 22 (4):273–280. [ PubMed : 3836223 ]
  • Baker EA, Brownson CA. Defining characteristics of community-based health promotion programs. In: Brownson RC, Baker EA, Novick LF, editors. Community -Based Prevention Programs that Work. Gaithersburg, MD: Aspen; 1999. pp. 7–19.
  • Balestra DJ, Littenberg B. Should adult tetanus immunization be given as a single vaccination at age 65? A cost-effectiveness analysis. Journal of General Internal Medicine. 1993; 8 :405–412. [ PubMed : 8410405 ]
  • Barry MJ, Fowler FJ, Mulley AG, Henderson JV, Wennberg JE. Patient reactions to a program designed to facilitate patient participation in treatment decisions for benign prostatic hyperplasia. Medical Care. 1995; 33 :771–782. [ PubMed : 7543639 ]
  • Beery B, Nelson G. Making outcomes matter. Seattle: Group Health/Kaiser Permanente Community Foundation; 1998. Evaluating community-based health initiatives: Dilemmas, puzzles, innovations and promising directions.
  • Bennett KJ, Torrance GW. Measuring health preferences and utilities: Rating scale, time trade-off and standard gamble methods. In: Spliker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadelphia: Lippincott-Raven; 1996. pp. 235–265.
  • Berger ES, Hendee WR. The expression of health risk information. Archives of Internal Medicine. 1989; 149 :1507–1508. [ PubMed : 2742423 ]
  • Berger PL, Neuhaus RJ. To empower people: The role of mediating structures in public policy. Washington, DC: American Enterprise Institute for Public Policy Research; 1977.
  • Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal. 1998; 317 :465–468. [ PMC free article : PMC1113716 ] [ PubMed : 9703533 ]
  • Bickman L. The functions of program theory. New Directions in Program Evaluation. 1987; 33 :5–18.
  • Bigger JTJ. Antiarrhythmic treatment: An overview. American Journal of Cardiology. 1984; 53 :8B–16B. [ PubMed : 6364771 ]
  • Bishop R. Initiating empowering research? New Zealand Journal of Educational Studies. 1994; 29 :175–188.
  • Bishop R. Addressing issues of self-determination and legitimation in Kaupapa Maori research. In: Webber B, editor. Research Perspectives in Maori Education. Wellington, New Zealand: Council for Educational Research; 1996. pp. 143–160.
  • Black WC, Nease RFJ, Tosteson AN. Perceptions of breast cancer risk and screening effectiveness in women younger than 50 years of age. Journal of the National Cancer Institute. 1995; 87 :720–731. [ PubMed : 7563149 ]
  • Blaine TM, Forster JL, Hennrikus D, O'Neil S, Wolfson M, Pham H. Creating tobacco control policy at the local level: Implementation of a direct action organizing approach. Health Education and Behavior. 1997; 24 :640–651. [ PubMed : 9307899 ]
  • Botvin GJ, Baker E, Dusenbury L, Botvin EM, Diaz T. Long-term followup results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association. 1995; 273 :1106–1112. [ PubMed : 7707598 ]
  • Brown ER. Community action for health promotion: A strategy to empower individuals and communities. International Journal of Health Services. 1991; 21 :441–456. [ PubMed : 1917205 ]
  • Brown P. The role of the evaluator in comprehensive community initiatives. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, editors. New Approaches to Evaluating Community Initiatives. Washington, DC: Aspen; 1995. pp. 201–225.
  • Bush PJ, Zuckerman AE, Taggart VS, Theiss PK, Peleg EO, Smith SA. Cardiovascular risk factor prevention in black school children: The Know Your Body: Evaluation Project. Health Education Quarterly. 1989; 16 :215–228. [ PubMed : 2732064 ]
  • Butterfoss FD, Morrow AL, Rosenthal J, Dini E, Crews RC, Webster JD, Louis P. CINCH: An urban coalition for empowerment and action. Health Education and Behavior. 1998; 25 :212–225. [ PubMed : 9548061 ]
  • Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion. Health Education Research. 1993; 8 :315–330. [ PubMed : 10146473 ]
  • Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally; 1963.
  • Cardiac Arrhythmia Suppression Trial (CAST) Investigators. Preliminary report: Effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. New England Journal of Medicine. 1989; 321 :406–412. [ PubMed : 2473403 ]
  • Cassileth BR, Zupkis RV, Sutton-Smith K, March V. Information and participation preferences among cancer patients. Annals of Internal Medicine. 1980; 92 :832–836. [ PubMed : 7387025 ]
  • Centers for Disease Control, Agency for Toxic Substances and Disease Registry (CDC/ ATSDR). Principles of Community Engagement. Atlanta: CDC Public Health Practice Program Office; 1997.
  • Chambless DL, Hollon SD. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998; 66 :7–18. [ PubMed : 9489259 ]
  • Clemen RT. Making Hard Decisions. Boston: PWS-Kent; 1991.
  • Compas BE, Haaga DF, Keefe FJ, Leitenberg H, Williams DA. Sampling of empirically supported psychological treatments from health psychology: Smoking, chronic pain, cancer, and bulimia nervosa. Journal of Consulting and Clinical Psychology. 1998; 66 :89–112. [ PubMed : 9489263 ]
  • Cook TD, Reichardt CS. Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage; 1979.
  • Cornwall A. Towards participatory practice: Participatory rural appraisal (PRA) and the participatory process. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 94–107.
  • Cornwall A, Jewkes R. What is participatory research? Social Science and Medicine. 1995; 41 :1667–1676. [ PubMed : 8746866 ]
  • Cousins JB, Earl LM, editors. Participatory Evaluation: Studies in Evaluation Use and Organizational Learning. London: Falmer; 1995.
  • Cromwell J, Bartosch WJ, Fiore MC, Hasselblad V, Baker T. Cost-effectiveness of the clinical practice recommendations in the AHCPR guideline for smoking cessation. Journal of the American Medical Association. 1997; 278 :1759–1766. [ PubMed : 9388153 ]
  • Cummings NA, Cummings JL, Johnson JN, editors. Behavioral Health in Primary Care: A Guide for Clinical Integration. Madison, CT: Psychosocial Press; 1997.
  • Danese MD, Powe NR, Sawin CT, Ladenson PW. Screening for mild thyroid failure at the periodic health examination: A decision and cost-effectiveness analysis. Journal of the American Medical Association. 1996; 276 :285–292. [ PubMed : 8656540 ]
  • Dannenberg AL, Gielen AC, Beilenson PL, Wilson MH, Joffe A. Bicycle helmet laws and educational campaigns: An evaluation of strategies to increase children's helmet use. American Journal of Public Health. 1993; 83 :667–674. [ PMC free article : PMC1694700 ] [ PubMed : 8484446 ]
  • Deber RB. Physicians in health care management. 7. The patient-physician partnership: Changing roles and the desire for information. Canadian Medical Association Journal. 1994; 151 :171–176. [ PMC free article : PMC1336877 ] [ PubMed : 8039062 ]
  • Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Archives of Internal Medicine. 1996; 156 :1414–1420. [ PubMed : 8678709 ]
  • DeJong W, Hingson R. Strategies to reduce driving under the influence of alcohol. Annual Review of Public Health. 1998; 19 :359–378. [ PubMed : 9611624 ]
  • deKoning K, Martin M. Participatory research in health: Setting the context. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 1–18.
  • Denzin NK. The research act. In: Denzin NK, editor. The Research Act in Sociology: A Theoretical Introduction to Sociological Methods. Chicago, IL: Aldine; 1970. pp. 345–360.
  • Denzin NK. The suicide machine. In: Long RE, editor. Suicide. 2. Vol. 67. New York: H.W. Wilson; 1994.
  • Dignan MB, editor. Measurement and evaluation of health education. Springfield, IL: C.C. Thomas; 1989.
  • Dockery G. Rhetoric or reality? Participatory research in the National Health Service, UK. In: deKoning K, Martin M, editors. Participatory Research in Health: Issues and Experiences. London: Zed Books; 1996. pp. 164–176.
  • Donaldson SI, Graham JW, Hansen WB. Testing the generalizability of intervening mechanism theories: Understanding the effects of adolescent drug use prevention interventions. Journal of Behavioral Medicine. 1994; 17 :195–216. [ PubMed : 8035452 ]
  • Dressler WW. Commentary on “Community Research: Partnership in Black Communities.” American Journal of Preventive Medicine. 1993; 9 :32–34. [ PubMed : 8123284 ]
  • Durie MH. Characteristics of Maori health research. Presented at Hui Whakapiripiri: A Hui to Discuss Strategic Directions for Maori Health Research; Wellington, New Zealand: Eru Pomare Maori Health Research Centre, Wellington School of Medicine, University of Otago; 1996.
  • Eddy DM. Screening for cervical cancer. Annals of Internal Medicine. 1990; 113 :214–226. Reprinted in Eddy, D.M. (1991). Common Screening Tests. Philadelphia: American College of Physicians. [ PubMed : 2115753 ]
  • Edelson JT, Weinstein MC, Tosteson ANA, Williams L, Lee TH, Goldman L. Long-term cost-effectiveness of various initial monotherapies for mild to moderate hypertension. Journal of the American Medical Association. 1990; 263 :407–413. [ PubMed : 2136759 ]
  • Edworthy J, Adams AS. Warning Design. London: Taylor and Francis; 1997.
  • Elden M, Levin M. Cogenerative learning. In: Whyte WF, editor. Participatory Action Research. Newbury Park, CA: Sage; 1991. pp. 127–142.
  • Emmons KM, Thompson B, Sorensen G, Linnan L, Basen-Engquist K, Biener L, Watson M. The relationship between organizational characteristics and the adoption of workplace smoking policies. Health Education and Behavior. 2000; 27 :483–501. [ PubMed : 10929755 ]
  • Ende J, Kazis L, Ash A, Moskowitz MA. Measuring patients' desire for autonomy: Decision making and information-seeking preferences among medical patients. Journal of General Internal Medicine. 1989; 4 :23–30. [ PubMed : 2644407 ]
  • Eng E, Blanchard L. Action-oriented community diagnosis: A health education tool. International Quarterly of Community Health Education. 1990–91; 11 :93–110. [ PubMed : 20840941 ]
  • Eng E, Parker EA. Measuring community competence in the Mississippi Delta: the interface between program evaluation and empowerment. Health Education Quarterly. 1994; 21 :199–220. [ PubMed : 8021148 ]
  • Erdmann TC, Feldman KW, Rivara FP, Heimbach DM, Wall HA. Tap water burn prevention: The effect of legislation. Pediatrics. 1991; 88 :572–577. [ PubMed : 1881739 ]
  • Ericsson A, Simon HA. Verbal Protocol As Data. Cambridge, MA: MIT Press; 1994.
  • Fawcett SB, Lewis RK, Paine-Andrews A, Francisco VT, Richter KP, Williams EL, Copple B. Evaluating community coalitions for prevention of substance abuse: The case of Project Freedom. Health Education and Behavior. 1997; 24 :812–828. [ PubMed : 9408793 ]
  • Fawcett SB. Some values guiding community research and action. Journal of Applied Behavior Analysis. 1991; 24 :621–636. [ PMC free article : PMC1279615 ] [ PubMed : 16795759 ]
  • Fawcett SB, Paine-Andrews A, Francisco VT, Schultz JA, Richter KP, Lewis RK, Harris KJ, Williams EL, Berkley JY, Lopez CM, Fisher JL. Empowering community health initiatives through evaluation. In: Fetterman D, Kaftarian S, Wandersman A, editors. Empowerment Evaluation: Knowledge And Tools Of Self-Assessment And Accountability. Thousand Oaks, CA: Sage; 1996. pp. 161–187.
  • Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” American Journal of Medicine. 1997; 103 :529–535. [ PubMed : 9428837 ]
  • Fischhoff B. Risk Perception and Risk Communication. Presented at the Workshop on Health, Communications and Behavior of the IOM Committee on Health and Behavior: Research, Practice and Policy; Irvine, CA. 1999a.
  • Fischhoff B. Why (cancer) risk communication can be hard. Journal of the National Cancer Institute Monographs. 1999b; 25 :7–13. [ PubMed : 10854449 ]
  • Fischhoff B, Bruine de Bruin W. Fifty/fifty=50? Journal of Behavioral Decision Making. 1999; 12 :149–163.
  • Fischhoff B, Downs J. Accentuate the relevant. Psychological Science. 1997; 18 :154–158.
  • Fisher EB Jr. The results of the COMMIT trial. American Journal of Public Health. 1995; 85 :159–160. [ PMC free article : PMC1615304 ] [ PubMed : 7856770 ]
  • Flay B. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986; 15 :451–474. [ PubMed : 3534875 ]
  • Flood AB, Wennberg JE, Nease RFJ, Fowler FJJ, Ding J, Hynes LM. The importance of patient preference in the decision to screen for prostate cancer. Prostate Patient Outcomes Research Team. Journal of General Internal Medicine. 1996; 11 :342–349. [ PubMed : 8803740 ]
  • Florin P, Wandersman A. An introduction to citizen participation, voluntary organizations, and community development: Insights for empowerment through research. American Journal of Community Psychology. 1990; 18 :41–53.
  • Francisco VT, Paine AL, Fawcett SB. A methodology for monitoring and evaluating community health coalitions. Health Education Research. 1993; 8 :403–416. [ PubMed : 10146477 ]
  • Freire P. Education for Critical Consciousness. New York: Continuum; 1987.
  • Frick MH, Elo O, Haapa K, Heinonen OP, Heinsalmi P, Helo P, Huttunen JK, Kaitaniemi P, Koskinen P, Manninen V, Maenpaa H, Malkonen M, Manttari M, Norola S, Pasternack A, Pikkarainen J, Romo M, Sjoblom T, Nikkila EA. Helsinki Heart Study: Primary-prevention trial with gemfibrozil in middle-aged men with dyslipidemia. Safety of treatment, changes in risk factors, and incidence of coronary heart disease. New England Journal of Medicine. 1987; 317 :1237–1245. [ PubMed : 3313041 ]
  • Friedman LM, Furberg CM, De Mets DL. Fundamentals of Clinical Trials. St. Louis: Mosby-Year Book; 1985.
  • Frosch M, Kaplan RM. Shared decision-making in clinical practice: Past research and future directions. American Journal of Preventive Medicine. 1999; 17 :285–294. [ PubMed : 10606197 ]
  • Gaventa J. The powerful, the powerless, and the experts: Knowledge struggles in an information age. In: Park P, Brydon-Miller M, Hall B, Jackson T, editors. Voices of Change: Participatory Research In The United States and Canada. Westport, CT: Bergin and Garvey; 1993. pp. 21–40.
  • Gentner D, Stevens A. Mental Models (Cognitive Science). Hillsdale, NJ: Erlbaum; 1983.
  • Gold MR, Siegel JE, Russell LB, Weinstein MC, editors. Cost-Effectiveness in Health And Medicine. New York: Oxford University Press; 1996.
  • Goldman L, Weinstein MC, Goldman PA, Williams LW. Cost-effectiveness of HMG-CoA reductase inhibition. Journal of the American Medical Association. 1991; 6 :1145–1151. [ PubMed : 1899896 ]
  • Golomb BA. Cholesterol and violence: is there a connection? Annals of Internal Medicine. 1998; 128 :478–487. [ PubMed : 9499332 ]
  • Goodman RM. Principles and tools for evaluating community-based prevention and health promotion programs. In: Brownson RC, Baker EA, Novick LF, editors. Community-Based Prevention Programs That Work. Gaithersburg, MD: Aspen; 1999. pp. 211–227.
  • Goodman RM, Wandersman A. FORECAST: A formative approach to evaluating community coalitions and community-based initiatives. Journal of Community Psychology, Supplement. 1994:6–25.
  • Goodman RM, Steckler A, Kegler MC. Mobilizing organizations for health enhancement: Theories of organizational change. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education. San Francisco: Jossey-Bass; 1997. pp. 287–312.
  • Gordon RL, Baker EL, Roper WL, Omenn GS. Prevention and the reforming U.S. health care system: Changing roles and responsibilities for public health. Annual Review of Public Health. 1996; 17 :489–509. [ PubMed : 8724237 ]
  • Gottlieb NH, McLeroy KR. Social health. In: O'Donnell MP, Harris JS, editors. Health promotion in the workplace. Albany, NY: Delmar; 1994. pp. 459–493.
  • Green LW. Evaluation and measurement: Some dilemmas for health education. American Journal of Public Health. 1977; 67 :155–166. [ PMC free article : PMC1653552 ] [ PubMed : 402085 ]
  • Green LW, Gordon NP. Productive research designs for health education investigations. Health-Education. 1982; 13 :4–10.
  • Green LW, Lewis FM. Measurement and Evaluation in Health Education and Health Promotion. Palo Alto, CA: Mayfield; 1986.
  • Green LW, George MA, Daniel M, Frankish CJ, Herbert CJ, Bowie WR, O'Neil M. Study of Participatory Research in Health Promotion. University of British Columbia, Vancouver: The Royal Society of Canada; 1995.
  • Green LW, Richard L, Potvin L. Ecological foundations of health promotion. American Journal of Health Promotion. 1996; 10 :270–281. [ PubMed : 10159708 ]
  • Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care. Annals of Internal Medicine. 1985; 102 :520–528. [ PubMed : 3977198 ]
  • Greenfield S, Kaplan SH, Ware JE, Yano EM, Frank HJL. Patients participation in medical care: Effects on blood sugar control and quality of life in diabetes. Journal of General Internal Medicine. 1988; 3 :448–457. [ PubMed : 3049968 ]
  • Greenwald P. Epidemiology: A step forward in the scientific approach to preventing cancer through chemoprevention. Public Health Reports. 1984; 99 :259–264. [ PMC free article : PMC1424586 ] [ PubMed : 6429723 ]
  • Greenwald P, Cullen JW. A scientific approach to cancer control. CA: A Cancer Journal for Clinicians. 1984; 34 :328–332. [ PubMed : 6437624 ]
  • Griffith HM, Dickey L, Kamerow DB. Put prevention into practice: a systematic approach. Journal of Public Health Management and Practice. 1995; 1 :9–15. [ PubMed : 10186631 ]
  • Guba EG, Lincoln YS. Fourth Generation Evaluation. Newbury Park, CA: Sage; 1989.
  • Hadden SG. Read The Label: Reducing Risk By Providing Information. Boulder, CO: Westview; 1986.
  • Hall BL. From margins to center? The development and purpose of participatory research. American Sociologist. 1992; 23 :15–28.
  • Hancock L, Sanson-Fisher RW, Redman S, Burton R, Burton L, Butler J, Girgis A, Gibberd R, Hensley M, McClintock A, Reid A, Schofield M, Tripodi T, Walsh R. Community action for health promotion: A review of methods and outcomes 1990–1995. American Journal of Preventive Medicine. 1997; 13 :229–239. [ PubMed : 9236957 ]
  • Hancock T. The healthy city from concept to application: Implications forresearch. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and Practice. New York: Routledge; 1993. pp. 14–24.
  • Hatch J, Moss N, Saran A, Presley-Cantrell L, Mallory C. Community research: partnership in Black communities. American Journal of Preventive Medicine. 1993; 9 :27–31. [ PubMed : 8123284 ]
  • He J, Ogden LG, Vupputuri S, Bazzano LA, Loria C, Whelton PK. Dietary sodium intake and subsequent risk of cardiovascular disease in overweight adults. Journal of the American Medical Association. 1999; 282 :2027–2034. [ PubMed : 10591385 ]
  • Health Care Financing Administration, Department of Health and Human Services. Highlights: National Health Expenditures, 1997. 1998. [Accessed October 31, 1998]. [On-line]. Available: http://www ​.hcfa.gov/stats ​/nhe-oact/hilites.htm .
  • Heaney CA, Goetzel RZ. A review of health-related outcomes of multi-component worksite health promotion programs. American Journal of Health Promotion. 1997; 11 :290–307. [ PubMed : 10165522 ]
  • Hingson R. Prevention of drinking and driving. Alcohol Health and Research World. 1996; 20 :219–226. [ PMC free article : PMC6876524 ] [ PubMed : 31798161 ]
  • Himmelman AT. Communities Working Collaboratively for a Change. University of Minnesota, MN: Humphrey Institute of Public Affairs; 1992.
  • Hollister RG, Hill J. Problems in the evaluation of community-wide initiatives. In: Connell JP, Kubisch AC, Schorr LB, Weiss CH, editors. New Approaches to Evaluating Community Initiatives. Washington, DC: Aspen; 1995. pp. 127–172.
  • Horwitz RI, Daniels SR. Bias or biology: Evaluating the epidemiologic studies of L-tryptophan and the eosinophilia-myalgia syndrome. Journal of Rheumatology Supplement. 1996; 46 :60–72. [ PubMed : 8895182 ]
  • Horwitz RI. Complexity and contradiction in clinical trial research. American Journal of Medicine. 1987a; 82 :498–510. [ PubMed : 3548349 ]
  • Horwitz RI. The experimental paradigm and observational studies of cause-effect relationships in clinical medicine. Journal of Chronic Disease. 1987b; 40 :91–99. [ PubMed : 3805237 ]
  • Horwitz RI, Singer BH, Makuch RW, Viscoli CM. Can treatment that is helpful on average be harmful to some patients? A study of the conflicting information needs of clinical inquiry and drug regulation. Journal of Clinical Epidemiology. 1996; 49 :395–400. [ PubMed : 8621989 ]
  • Horwitz RI, Viscoli CM, Clemens JD, Sadock RT. Developing improved observational methods for evaluating therapeutic effectiveness. American Journal of Medicine. 1990; 89 :630–638. [ PubMed : 1978566 ]
  • House ER. Evaluating with validity. Beverly Hills, CA: Sage; 1980.
  • Hugentobler MK, Israel BA, Schurman SJ. An action research approach to workplace health: Integrating methods. Health Education Quarterly. 1992; 19 :55–76. [ PubMed : 1568874 ]
  • Impicciatore P, Pandolfini C, Casella N, Bonati M. Reliability of health information for the public on the world wide web: Systematic survey of advice on managing fever in children at home. British Medical Journal. 1997; 314 :1875–1881. [ PMC free article : PMC2126984 ] [ PubMed : 9224132 ]
  • IOM (Institute of Medicine). Reducing the Burden of Injury: Advancing Prevention and Treatment. Washington, DC: National Academy; 1999. [ PubMed : 25101422 ]
  • IOM (Institute of Medicine). Speaking of Health: Assessing Health Communication. In: Chrvala C, Scrimshaw S, editors. Strategies for Diverse Populations. Washington, DC: National Academy Press; 2001.
  • Israel BA.Practitioner-oriented Approaches to Evaluating Health EducationInterventions: Multiple Purposes—Multiple Methods. Paper presented at the National Conference on Health Education and Health Promotion; Tampa, FL. 1994.
  • Israel BA, Schurman SJ. Social support, control and the stress process. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education: Theory, Research and Practice. San Francisco: Jossey-Bass; 1990. pp. 179–205.
  • Israel BA, Baker EA, Goldenhar LM, Heaney CA, Schurman SJ. Occupational stress, safety, and health: Conceptual framework and principles for effective prevention interventions. Journal of Occupational Health Psychology. 1996; 1 :261–286. [ PubMed : 9547051 ]
  • Israel BA, Checkoway B, Schulz AJ, Zimmerman MA. Health education and community empowerment: conceptualizing and measuring perceptions of individual, organizational, and community control. Health Education Quarterly. 1994; 21 :149–170. [ PubMed : 8021145 ]
  • Israel BA, Cummings KM, Dignan MB, Heaney CA, Perales DP, Simons-Morton BG, Zimmerman MA. Evaluation of health education programs: Current assessment and future directions. Health Education Quarterly. 1995; 22 :364–389. [ PubMed : 7591790 ]
  • Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health. 1998; 19 :173–202. [ PubMed : 9611617 ]
  • Israel BA, Schurman SJ, House JS. Action research on occupational stress: Involving workers as researchers. International Journal of Health Services. 1989; 19 :135–155. [ PubMed : 2925298 ]
  • Israel BA, Schurman SJ, Hugentobler MK. Conducting action research: Relationships between organization members and researchers. Journal of Applied Behavioral Science. 1992a; 28 :74–101.
  • Israel BA, Schurman SJ, Hugentobler MK, House JS. A participatory action research approach to reducing occupational stress in the United States. In: DiMartino V, editor. Preventing Stress at Work: Conditions of Work Digest. II. Geneva, Switzerland: International Labor Office; 1992b. pp. 152–163.
  • James SA. Racial and ethnic differences in infant mortality and low birth weight: A psychosocial critique. Annals of Epidemiology. 1993; 3 :130–136. [ PubMed : 8269064 ]
  • Johnson-Laird PN. Cognitive Science. 6. New York: Cambridge University Press; 1980. Mental models: Towards a cognitive science of language, inference and consciousness.
  • Kahneman D, Tversky A. Choices, values, and frames. American Psychologist. 1983; 39 :341–350.
  • Kahneman D, Tversky A. On the reality of cognitive illusions. Psychological Review. 1996; 103 :582–591. [ PubMed : 8759048 ]
  • Kalet A, Roberts JC, Fletcher R. How do physicians talk with their patients about risks? Journal of General Internal Medicine. 1994; 9 :402–404. [ PubMed : 7931751 ]
  • Kaplan RM. Value judgment in the Oregon Medicaid experiment. Medical Care. 1994; 32 :975–988. [ PubMed : 7934274 ]
  • Kaplan RM. Profile versus utility based measures of outcome for clinical trials. In: Staquet MJ, Hays RD, Fayers PM, editors. Quality of Life Assessment in Clinical Trials. London: Oxford University Press; 1998. pp. 69–90.
  • Kaplan RM, Anderson JP. The general health policy model: An integrated approach. In: Spilker B, editor. Quality of Life and Pharmacoeconomics in Clinical Trials. Philadephia: Lippencott-Raven; 1996. pp. 309–322.
  • Kasper JF, Mulley AG, Wennberg JE. Developing shared decision-making programs to improve the quality of health care. Quality Review Bulletin. 1992; 18 :183–190. [ PubMed : 1379705 ]
  • Kass D, Freudenberg N. Coalition building to prevent childhood lead poisoning: A case study from New York City. In: Minkler M, editor. Community Organizing and Community Building for Health. New Brunswick, NJ: Rutgers University Press; 1997. pp. 278–288.
  • Kegler MC, Steckler A, Malek SH, McLeroy K. A multiple case study of implementation in 10 local Project ASSIST coalitions in North Carolina. Health Education Research. 1998a; 13 :225–238. [ PubMed : 10181021 ]
  • Kegler MC, Steckler A, McLeroy K, Malek SH. Factors that contribute to effective community health promotion coalitions: A study of 10 Project ASSIST coalitions in North Carolina. American Stop Smoking Intervention Study for Cancer Prevention. Health Education and Behavior. 1998b; 25 :338–353. [ PubMed : 9615243 ]
  • Klein DC. Community Dynamics and Mental Health. New York: Wiley; 1968.
  • Klitzner M. A public health/dynamic systems approach to community-wide alcohol and other drug initiatives. In: Davis RC, Lurigo AJ, Rosenbaum DP, editors. Drugs and the Community. Springfield, IL: Charles C. Thomas; 1993. pp. 201–224.
  • Koepsell TD. Epidemiologic issues in the design of community intervention trials. In: Brownson R, Petitti D, editors. Applied Epidemiology: Theory To Practice. New York: Oxford University Press; 1998. pp. 177–212.
  • Koepsell TD, Diehr PH, Cheadle A, Kristal A. Invited commentary: Symposium on community intervention trials. American Journal of Epidemiology. 1995; 142 :594–599. [ PubMed : 7653467 ]
  • Koepsell TD, Wagner EH, Cheadle AC, Patrick DL, Martin DC, Diehr PH, Perrin EB, Kristal AR, Allan-Andrilla CH, Dey LJ. Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health. 1992; 13 :31–57. [ PubMed : 1599591 ]
  • Kong A, Barnett GO, Mosteller F, Youtz C. How medical professionals evaluate expressions of probability. New England Journal of Medicine. 1986; 315 :740–744. [ PubMed : 3748081 ]
  • Kraus JF. Effectiveness of measures to prevent unintentional deaths of infants and children from suffocation and strangulation. Public Health Report. 1985; 100 :231–240. [ PMC free article : PMC1424727 ] [ PubMed : 3920722 ]
  • Kraus JF, Peek C, McArthur DL, Williams A. The effect of the 1992 California motorcycle helmet use law on motorcycle crash fatalities and injuries. Journal of the American Medical Association. 1994; 272 :1506–1511. [ PubMed : 7966842 ]
  • Krieger N. Epidemiology and the web of causation: Has anyone seen the spider? Social Science and Medicine. 1994; 39 :887–903. [ PubMed : 7992123 ]
  • Krieger N, Rowley DL, Herman AA, Avery B, Phillips MT. Racism, sexism and social class: Implications for studies of health, disease and well-being. American Journal of Preventive Medicine. 1993; 9 :82–122. [ PubMed : 8123288 ]
  • La Puma J, Lawlor EF. Quality-adjusted life-years. Ethical implications for physicians and policymakers. Journal of the American Medical Association. 1990; 263 :2917–2921. [ PubMed : 2110986 ]
  • Labonte R. Health promotion and empowerment: reflections on professionalpractice. Health Education Quarterly. 1994; 21 :253–268. [ PubMed : 8021151 ]
  • Lalonde M. A new perspective on the health of Canadians. Ottawa, ON: Ministry of Supply and Services; 1974.
  • Lando HA, Pechacek TF, Pirie PL, Murray DM, Mittelmark MB, Lichtenstein E, Nothwehyr F, Gray C. Changes in adult cigarette smoking in the Minnesota Heart Health Program. American Journal of Public Health. 1995; 85 :201–208. [ PMC free article : PMC1615309 ] [ PubMed : 7856779 ]
  • Lantz PM, House JS, Lepkowski JM, Williams DR, Mero RP, Chen J. Socioeconomic factors, health behaviors, and mortality. Journal of the American Medical Association. 1998; 279 :1703–1708. [ PubMed : 9624022 ]
  • Last J. Redefining the unacceptable. Lancet. 1995; 346 :1642–1643. [ PubMed : 8551816 ]
  • Lather P. Research as praxis. Harvard Educational Review. 1986; 56 :259–277.
  • Lenert L, Kaplan RM. Validity and interpretation of preference-based measures of health-related quality of life. Medical Care. 2000; 38 :138–150. [ PubMed : 10982099 ]
  • Leventhal H, Cameron L. Behavioral theories and the problem of compliance. Patient Education and Counseling. 1987; 10 :117–138.
  • Levine DM, Becker DM, Bone LR, Stillman FA, Tuggle MB II, Prentice M, Carter J, Filippeli J. A partnership with minority populations: A community model of effectiveness research. Ethnicity and Disease. 1992; 2 :296–305. [ PubMed : 1467764 ]
  • Lewin K. Field Theory in Social Science. New York: Harper; 1951.
  • Lewis CE. Disease prevention and health promotion practices of primary care physicians in the United States. American Journal of Preventive Medicine. 1988; 4 :9–16. [ PubMed : 3079144 ]
  • Liao L, Jollis JG, DeLong ER, Peterson ED, Morris KG, Mark DB. Impact of an interactive video on decision making of patients with ischemic heart disease. Journal of General Internal Medicine. 1996; 11 :373–376. [ PubMed : 8803746 ]
  • Lichter AS, Lippman ME, Danforth DN Jr, d'Angelo T, Steinberg SM, deMoss E, MacDonald HD, Reichert CM, Merino M, Swain SM, et al. Mastectomy versus breast-conserving therapy in the treatment of stage I and II carcinoma of the breast: A randomized trial at the National Cancer Institute. Journalof Clinical Oncokgy. 1992; 10 :976–983. [ PubMed : 1588378 ]
  • Lillie-Blanton M, Hoffman SC. Conducting an assessment of health needs and resources in a racial/ethnic minority community. Health Services Research. 1995; 30 :225–236. [ PMC free article : PMC1070051 ] [ PubMed : 7721594 ]
  • Lincoln YS, Reason P. Editor's introduction. Qualitative Inquiry. 1996; 2 :5–11.
  • Linville PW, Fischer GW, Fischhoff B. AIDS risk perceptions and decision biases. In: Pryor JB, Reeder GD, editors. The Social Psychology of HIV Infection. Hillsdale, NJ: Lawrence Erlbaum; 1993. pp. 5–38.
  • Lipid Research Clinics Program. The Lipid Research Clinics Coronary Primary Prevention Trial results. I. Reduction in incidence of coronary heart disease. Journal of the American Medical Association. 1984; 251 :351–364. [ PubMed : 6361299 ]
  • Lipkus IM, Hollands JG. The visual communication of risk. Journal of National Cancer Institute Monographs. 1999; 25 :149–162. [ PubMed : 10854471 ]
  • Lipsey MW. Theory as method: Small theories of treatments. New Direction in Program Evaluation. 1993; 57 :5–38.
  • Lipsey MW, Polard JA. Driving toward theory in program evaluation: More models to choose from. Evaluation and Program Planning. 1989; 12 :317–328.
  • Lund AK, Williams AF, Womack KN. Motorcycle helmet use in Texas. Public Health Reports. 1991; 106 :576–578. [ PMC free article : PMC1580316 ] [ PubMed : 1910193 ]
  • Maguire P. School of Education. Amherst, MA: The University of Massachusetts; 1987. Doing Participatory Research: A Feminist Approach.
  • Maguire P. Considering more feminist participatory research: What's congruency got to do with it? Qualitative Inquiry. 1996; 2 :106–118.
  • Marin G, Marin BV. Research with Hispanic Populations. Newbury Park, CA: Sage; 1991.
  • Matt GE, Navarro AM. What meta-analyses have and have not taught us about psychotherapy effects: A review and future directions. Clinical Psychology Review. 1997; 17 :1–32. [ PubMed : 9125365 ]
  • Mazur DJ, Hickam DH. Patients' preferences for risk disclosure and role in decision making for invasive medical procedures. Journal of General Internal Medicine. 1997; 12 :114–117. [ PMC free article : PMC1497069 ] [ PubMed : 9051561 ]
  • McGraw SA, Stone EJ, Osganian SK, Elder JP, Perry CL, Johnson CC, Parcel GS, Webber LS, Luepker RV. Design of process evaluation within the child and adolescent trial for cardiovascular health (CATCH). Health Education Quarterly. 1994:S5–S26. [ PubMed : 8113062 ]
  • McIntyre S, West P. What does the phrase “safer sex” mean to you? AIDS. 1992; 7 :121–126. [ PubMed : 8442902 ]
  • McKay HG, Feil EG, Glasgow RE, Brown JE. Feasibility and use of an internet support service for diabetes self-management. The Diabetes Educator. 1998; 24 :174–179. [ PubMed : 9555356 ]
  • McKinlay JB. The promotion of health through planned sociopolitical change: challenges for research and policy. Social Science and Medicine. 1993; 36 :109–117. [ PubMed : 8421787 ]
  • McKnight JL. Regenerating community. Social Policy. 1987; 17 :54–58.
  • McKnight JL. Politicizing health care. In: Conrad P, Kern R, editors. The Sociology Of Health And Illness: Critical Perspectives. New York: St. Martin's; 1994. pp. 437–441.
  • McVea K, Crabtree BF, Medder JD, Susman JL, Lukas L, McIlvain HE, Davis CM, Gilbert CS, Hawver M. An ounce of prevention? Evaluation of the ‘Put Prevention into Practice' program. Journal of Family Practice. 1996; 43 :361–369. [ PubMed : 8874371 ]
  • Merz J, Fischhoff B, Mazur DJ, Fischbeck PS. Decision-analytic approach to developing standards of disclosure for medical informed consent. Journal of Toxicsand Liability. 1993; 15 :191–215.
  • Minkler M. Health education, health promotion and the open society: An historical perspective. Health Education Quarterly. 1989; 16 :17–30. [ PubMed : 2649456 ]
  • Mittelmark MB, Hunt MK, Heath GW, Schmid TL. Realistic outcomes: Lessons from community-based research and demonstration programs for the prevention of cardiovascular diseases. Journal of Public Health Policy. 1993; 14 :437–462. [ PubMed : 8163634 ]
  • Monahan JL, Scheirer MA. The role of linking agents in the diffusion of health promotion programs. Health Education Quarterly. 1988; 15 :417–434. [ PubMed : 3230017 ]
  • Morgan MG. Fields from Electric Power [brochure]. Pittsburgh, PA: Department of Engineering and Public Policy, Carnegie Mellon University; 1995.
  • Morgan MG, Fischhoff B, Bostrom A, Atman C. Risk Communication:The Mental Models Approach. New York: Cambridge University Press; 2001.
  • Mosteller F, Colditz GA. Understanding research synthesis (meta-analysis). Annual Review of Public Health. 1996; 17 :1–23. [ PubMed : 8724213 ]
  • Muldoon MF, Manuck SB, Matthews KA. Lowering cholesterol concentrations and mortality: A quantitative review of primary prevention trials. British Medical Journal. 1990; 301 :309–314. [ PMC free article : PMC1663605 ] [ PubMed : 2144195 ]
  • Murray D. Design and analysis of community trials: Lessons from the Minnesota Heart Health Program. American Journal of Epidemilogy. 1995; 142 :569–575. [ PubMed : 7653464 ]
  • Murray DM. Dissemination of community health promotion programs: The Fargo-Moorhead Heart Health Program. Journal of School Health. 1986; 56 :375–381. [ PubMed : 3640927 ]
  • Myers AM, Pfeiffle P, Hinsdale K. Building a community-based consortium for AIDS patient services. Public Health Reports. 1994; 109 :555–562. [ PMC free article : PMC1403533 ] [ PubMed : 8041856 ]
  • National Research Council, Committee on Risk Perception and Communication. Improving Risk Communication. Washington, DC: National Academy Press; 1989.
  • NHLBI (National Heart, Lung, and Blood Institute). Guidelines for Demonstration And Education Research Grants. Washington, DC: National Institutes of Health; 1983.
  • NHLBI (National Heart, Lung, and Blood Institute). Report of the Task Force on Behavioral Research in Cardiovascular, Lung, and Blood Health and Disease. Bethesda, MD: National Institutes of Health; 1998.
  • Ni H, Sacks JJ, Curtis L, Cieslak PR, Hedberg K. Evaluation of a statewide bicycle helmet law via multiple measures of helmet use. Archives of Pediatric and Adolescent Medicine. 1997; 151 :59–65. [ PubMed : 9006530 ]
  • Nyden PW, Wiewel W. Collaborative research: harnessing the tensions between researcher and practitioner. American Sociologist. 1992; 24 :43–55.
  • O'Connor PJ, Solberg LI, Baird M. The future of primary care. The enhanced primary care model. Journal of Family Practice. 1998; 47 :62–67. [ PubMed : 9673610 ]
  • Office of Technology Assessment, U.S. Congress. Cost-Effectiveness of Influenza Vaccination. Washington, DC: Office of Technology Assessment; 1981.
  • Oldenburg B, French M, Sallis JF.Health behavior research: The quality of the evidence base. Paper presented at the Society of Behavioral Medicine Twentieth Annual Meeting; San Diego, CA. 1999.
  • Orlandi MA. Health Promotion Technology Transfer: Organizational Perspectives. Canadian Journal of Public Health. 1996a; 87 (Supplement 2):528–533. [ PubMed : 9002340 ]
  • Orlandi MA. Intervening with Drug-Involved Youth: Prevention, Treatment, and Research. Newbury Park, CA: Sage Publications; 1996b. Prevention Technologies for Drug-Involved Youth; pp. 81–100.
  • Orlandi MA. The diffusion and adoption of worksite health promotion innovations: An analysis of barriers. Preventive Medicine. 1986; 15 :522–536. [ PubMed : 3774782 ]
  • Parcel GS, Eriksen MP, Lovato CY, Gottlieb NH, Brink SG, Green LW. The diffusion of school-based tobacco-use prevention programs: Program description and baseline data. Health Education Research. 1989; 4 :111–124.
  • Parcel GS, O'Hara-Tompkins NM, Harris RB, Basen-Engquist KM, McCormick LK, Gottlieb NH, Eriksen MP. Diffusion of an Effective Tobacco Prevention Program. II. Evaluation of the Adoption Phase. Health Education Research. 1995; 10 :297–307. [ PubMed : 10158027 ]
  • Parcel GS, Perry CL, Taylor WC. Beyond Demonstration: Diffusion of Health Promotion Innovations. In: Bracht N, editor. Health Promotion at the Community Level. Thousand Oaks, CA: Sage Publications; 1990. pp. 229–251.
  • Parcel GS, Simons-Morton BG, O'Hara NM, Baranowski T, Wilson B. School promotion of healthful diet and physical activity: Impact on learning outcomes and self-reported behavior. Health Education Quarterly. 1989; 16 :181–199. [ PubMed : 2732062 ]
  • Park P, Brydon-Miller M, Hall B, Jackson T, editors. Voices of Change: Participatory Research in the United States and Canada. Westport, CT: Bergin and Garvey; 1993.
  • Parker EA, Schulz AJ, Israel BA, Hollis R. East Side Village Health Worker Partnership: Community-based health advisor intervention in an urban area. Health Education and Behavior. 1998; 25 :24–45. [ PubMed : 9474498 ]
  • Parsons T. The Social System. Glencoe, IL: Free Press; 1951.
  • Patton MQ. How to Use Qualitative Methods In Evaluation. Newbury Park, CA: Sage Publications; 1987.
  • Patton MQ. Qualitative Evaluation And Research Methods. 2nd Edition. Newbury Park, CA: Sage Publications; 1990.
  • Pearce N. Traditional epidemiology, modern epidemiology and public health. American Journal of Public Health. 1996; 86 :678–683. [ PMC free article : PMC1380476 ] [ PubMed : 8629719 ]
  • Pendleton L, House WC. Preferences for treatment approaches in medical care. Medical Care. 1984; 22 :644–646. [ PubMed : 6748782 ]
  • Pentz MA. Programs and Abstracts. Bethesda, MD: 1998. Research to practice in community-based prevention trials. Preventive intervention research at the crossroads: contributions and opportunities from the behavioral and social sciences; pp. 82–83.
  • Pentz MA, Trebow E. Implementation issues in drug abuse prevention research. Substance Use and Misuse. 1997; 32 :1655–1660. [ PubMed : 1922302 ]
  • Pentz MA, Trebow E, Hansen WB, MacKinnon DP, Dwyer JH, Flay BR, Daniels S, Cormack C, Johnson CA. Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP). Evaluation Review. 1990; 14 :264–289.
  • Perry CL. Cardiovascular disease prevention among youth: Visioning the future. Preventive Medicine. 1999; 29 :S79–S83. [ PubMed : 10641822 ]
  • Perry CL, Murray DM, Griffin G. Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health. 1990; 60 :501–504. [ PubMed : 2283869 ]
  • Plough A, Olafson F. Implementing the Boston Healthy Start Initiative: A case study of community empowerment and public health. Health Education Quarterly. 1994; 21 :221–234. [ PubMed : 8021149 ]
  • Price RH. Prevention programming as organizational reinvention: From research to implementation. In: Silverman MM, Anthony V, editors. Prevention of MentalDisorders, Alcohol and Drug Use in Children and Adolescents. Rockville, MD: Department of Health and Human Services; 1989. pp. 97–123.
  • Price RH.Theory guided reinvention as the key high fidelity prevention practice. Paper presented at the National Institute of Health meeting, “Preventive Intervention Research at the Crossroads: Contributions and Opportunities from the Behavioral and Social Sciences”; Bethesda, MD. 1998.
  • Pronk NP, O'Connor PJ. Systems approach to population health improvement. Journal of Ambulatory Care Management. 1997; 20 :24–31. [ PubMed : 10181620 ]
  • Putnam RD. Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University; 1993.
  • Rabeneck L, Viscoli CM, Horwitz RI. Problems in the conduct and analysis of randomized clinical trials. Are we getting the right answers to the wrong questions? Archives of Internal Medicine. 1992; 152 :507–512. [ PubMed : 1546913 ]
  • Raiffa H. Decision Analysis. Reading, MA: Addison-Wesley; 1968.
  • Reason P. Three approaches to participative inquiry. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage; 1994. pp. 324–339.
  • Reason P, editor. Human Inquiry in Action: Developments in New Paradigm Research. London: Sage; 1988.
  • Reichardt CS, Cook TD. “Paradigms Lost”: Some thoughts on choosing methods in evaluation research. Evaluation and Program Planning: An International Journal. 1980; 3 :229–236.
  • Rivara FP, Grossman DC, Cummings P. Injury prevention. First of two parts. New England Journal of Medicine. 1997a; 337 :543–548. [ PubMed : 9262499 ]
  • Rivara FP, Grossman DC, Cummings P. Injury prevention. Second of two parts. New England Journal of Medicine. 1997b; 337 :613–618. [ PubMed : 9271485 ]
  • Roberts-Gray C, Solomon T, Gottlieb N, Kelsey E. Heart partners: A strategy for promoting effective diffusion of school health promotion programs. Journal of School Health. 1998; 68 :106–116. [ PubMed : 9608451 ]
  • Robertson A, Minkler M. New health promotion movement: A critical examination. Health Education Quarterly. 1994; 21 :295–312. [ PubMed : 8002355 ]
  • Rogers EM. Diffusion of Innovations. 3rd ed. New York: The Free Press; 1983.
  • Rogers EM. Communication of Innovations. New York: The Free Press; 1995.
  • Rogers GB. The safety effects of child-resistant packaging for oral prescription drugs. Two decades of experience. Journal of the American Medical Association. 1996; 275 :1661–1665. [ PubMed : 8637140 ]
  • Rohrbach LA, D'Onofrio C, Backer T, Montgomery S. Diffusion of school' based substance abuse prevention programs. American Behavioral Scientist. 1996; 39 :919–934.
  • Rossi PH, Freeman HE. Evaluation: A Systematic Approach. Newbury Park, CA: Sage Publications; 1989.
  • Rutherford GW. Public health, communicable diseases, and managed care: Will managed care improve or weaken communicable disease control? American Journal of Preventive Medicine. 1998; 14 :53–59. [ PubMed : 9566938 ]
  • Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone; 1997.
  • Sarason SB. The Psychological Sense of Community: Prospects for a Community Psychology. San Francisco: Jossey-Bass; 1984.
  • Schein EH. Process Consulting. Reading, MA: Addition Wesley; 1987.
  • Schensul JJ, Denelli-Hess D, Borreo MG, Bhavati MP. Urban comadronas: Maternal and child health research and policy formulation in a Puerto Rican community. In: Stull DD, Schensul JJ, editors. Collaborative Research andSocial Change: Applied Anthropology in Action. Boulder, CO: Westview; 1987. pp. 9–32.
  • Schensul SL. Science, theory and application in anthropology. American Behavioral Scientist. 1985; 29 :164–185.
  • Schneiderman LJ, Kronick R, Kaplan RM, Anderson JP, Langer RD. Effects of offering advance directives on medical treatments and costs. Annals of Internal Medicine. 1992; 117 :599–606. [ PubMed : 1524334 ]
  • Schriver KA. Evaluating text quality: The continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication. 1989; 32 :238–255.
  • Schulz AJ, Israel BA, Selig SM, Bayer IS. Development and implementation of principles for community-based research in public health. In: Macnair RH, editor. Research Strategies For Community Practice. New York: Haworth Press; 1998a. pp. 83–110.
  • Schulz AJ, Parker EA, Israel BA, Becker AB, Maciak B, Hollis R. Conducting a participatory community-based survey: Collecting and interpreting data for a community health intervention on Detroit's East Side. Journal of Public Health Management Practice. 1998b; 4 :10–24. [ PubMed : 10186730 ]
  • Schwartz LM, Woloshin S, Black WC, Welch HG. The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine. 1997; 127 :966–972. [ PubMed : 9412301 ]
  • Schwartz N. Self-reports: How the questions shape the answer. American Psychologist. 1999; 54 :93–105.
  • Seligman ME. Science as an ally of practice. American Psychologist. 1996; 51 :1072–1079. [ PubMed : 8870544 ]
  • Shadish WR, Cook TD, Leviton LC. Foundations of Program Evaluation. Newbury Park, CA: Sage Publications; 1991.
  • Shadish WR, Matt GE, Navarro AM, Siegle G, Crits-Christoph P, Hazelrigg MD, Jorm AF, Lyons LC, Nietzel MT, Prout HT, Robinson L, Smith ML, Svartberg M, Weiss B. Evidence that therapy works in clinically representative conditions. Journal of Consulting and Clinical Psychology. 1997; 65 :355–365. [ PubMed : 9170759 ]
  • Sharf BF. Communicating breast cancer on-line: Support and empowerment on the internet. Women and Health. 1997; 26 :65–83. [ PubMed : 9311100 ]
  • Simons-Morton BG, Green WA, Gottlieb N. Health Education and Health Promotion. Prospect Heights, IL: Waveland; 1995.
  • Simons-Morton BG, Parcel GP, Baranowski T, O'Hara N, Forthofer R. Promoting a healthful diet and physical activity among children: Results of a school-based intervention study. American Journal of Public Health. 1991; 81 :986–991. [ PMC free article : PMC1405714 ] [ PubMed : 1854016 ]
  • Singer M. Knowledge for use: Anthropology and community-centered substanceabuse research. Social Science and Medicine. 1993; 37 :15–25. [ PubMed : 8332920 ]
  • Singer M. Community-centered praxis: Toward an alternative non-dominative applied anthropology. Human Organization. 1994; 53 :336–344.
  • Smith DW, Steckler A, McCormick LK, McLeroy KR. Lessons learned about disseminating health curricula to schools. Journal of Health Education. 1995; 26 :37–43.
  • Smithies J, Adams L. Walking the tightrope. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and Practice. New York: Routledge; 1993. pp. 55–70.
  • Solberg LI, Kottke TE, Brekke ML. Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Preventive Medicine. 1998a; 27 :623–631. [ PubMed : 9672958 ]
  • Solberg LI, Kottke TE, Brekke ML, Conn SA, Calomeni CA, Conboy KS. Delivering clinical preventive services is a systems problem. Annals of Behavioral Medicine. 1998b; 19 :271–278. [ PubMed : 9603701 ]
  • Sorensen G, Emmons K, Hunt MK, Johnston D. Implications of the results of community intervention trials. Annual Rreview of Public Health. 1998a; 19 :379–416. [ PubMed : 9611625 ]
  • Sorensen G, Thompson B, Basen-Engquist K, Abrams D, Kuniyuki A, DiClemente C, Biener L. Durability, dissemination and institutionalization of worksite tobacco control programs: Results from the Working Well Trial. International Journal of Behavioral Medicine. 1998b; 5 :335–351. [ PubMed : 16250700 ]
  • Spilker B. Quality of Life and Pharmacoeconomics. In: Spilker B, editor. Clinical Trials. Philadelphia: Lippincott-Raven; 1996.
  • Steckler A, Goodman RM, McLeroy KR, Davis S, Koch G. Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion. 1992; 6 :214–224. [ PubMed : 10148679 ]
  • Steckler AB, Dawson L, Israel BA, Eng E. Community health development: An overview of the works of Guy W. Steuart. Health Education Quarterly. 1993;(Suppl. 1):S3–S20. [ PubMed : 8354649 ]
  • Steckler AB, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly. 1992; 19 :1–8. [ PubMed : 1568869 ]
  • Steuart GW. Social and cultural perspectives: Community intervention and mental health. Health Education Quarterly. 1993:S99. [ PubMed : 8354654 ]
  • Stokols D. Establishing and maintaining healthy environments: Toward a social ecology of health promotion. American Psychologist. 1992; 47 :6–22. [ PubMed : 1539925 ]
  • Stokols D. Translating social ecological theory into guidelines for community health promotion. American Journal of Health Promotion. 1996; 10 :282–298. [ PubMed : 10159709 ]
  • Stone EJ, McGraw SA, Osganian SK, Elder JP. Process evaluation in the multicenter Child and Adolescent Trial for Cardiovascular Health (CATCH). Health Education Quarterly. 1994;(Suppl. 2):1–143. [ PubMed : 8113062 ]
  • Stringer ET. Action Research: A Handbook For Practitioners. Thousand Oaks, CA: Sage; 1996.
  • Strull WM, Lo B, Charles G. Do patients want to participate in medical decision making? Journal of the American Medical Association. 1984; 252 :2990–2994. [ PubMed : 6502860 ]
  • Strum S. Consultation and patient information on the Internet: The patients' forum. British Journal of Urology. 1997; 80 :22–26. [ PubMed : 9415081 ]
  • Susser M. The tribulations of trials-intervention in communities. American Journal of Public Health. 1995; 85 :156–158. [ PMC free article : PMC1615322 ] [ PubMed : 7856769 ]
  • Susser M. Choosing a future for epidemiology. I. Eras and paradigms. American Journal of Public Health. 1996a; 86 :668–673. [ PMC free article : PMC1380474 ] [ PubMed : 8629717 ]
  • Susser M, Susser E. From black box to Chinese boxes and eco-epidemiology. American Journal of Public Health. 1996b; 86 :674–677. [ PMC free article : PMC1380475 ] [ PubMed : 8629718 ]
  • Tandon R. Participatory evaluation and research: Main concepts and issues. In: Fernandes W, Tandon R, editors. Participatory Research and Evaluation. New Delhi: Indian Social Institute; 1981. pp. 15–34.
  • Thomas SB, Morgan CH. Evaluation of community-based AIDS education and risk reduction projects in ethnic and racial minority communities. Evaluation and Program Planning. 1991; 14 :247–255.
  • Thompson DC, Nunn ME, Thompson RS, Rivara FP. Effectiveness of bicycle safety helmets in preventing serious facial injury. Journal of the American Medical Association. 1996a; 276 :1974–1975. [ PubMed : 8971067 ]
  • Thompson DC, Rivara FP, Thompson RS. Effectiveness of bicycle safety helmets in preventing head injuries: A case-control study. Journal of the American Medical Association. 1996b; 276 :1968–1973. [ PubMed : 8971066 ]
  • Thompson RS, Taplin SH, McAfee TA, Mandelson MT, Smith AE. Primary and secondary prevention services in clinical practice. Twenty years' experience in development, implementation, and evaluation. Journal of the American Medical Association. 1995; 273 :1130–1135. [ PubMed : 7707602 ]
  • Torrance GW. Toward a utility theory foundation for health status index models. Health Services Research. 1976; 11 :349–369. [ PMC free article : PMC1071938 ] [ PubMed : 1025050 ]
  • Tversky A, Fox CR. Weighing risk and uncertainty. Psychological Review. 1995; 102 :269–283.
  • Tversky A, Kahneman D. Rational choice and the framing of decisions. In: Bell DE, Raiffa H, Tversky A, editors. Decision Making: Descriptive, Normative, And Prescriptive Interactions. Cambridge: Cambridge University Press; 1988. pp. 167–192.
  • Tversky A, Shafir E. The disjunction effect in choice under uncertainty. Psychological Science. 1992; 3 :305–309.
  • U.S. Department of Health and Human Services. Status Report. Washington, DC: NIH Publication #90-3107; 1990. Smoking, Tobacco, and CancerProgram: 1985–1989.
  • Vega WA. Theoretical and pragmatic implications of cultural diversity for community research. American Journal of Community Psychology. 1992; 20 :375–391.
  • Von Winterfeldt D, Edwards W. Decision Analysis and Behavioral Research. New York: Cambridge University Press; 1986.
  • Wagner E, Austin B, Von Korff M. Organizing care for patients with chronic illness. Millbank Quarterly. 1996; 76 :511–544. [ PubMed : 8941260 ]
  • Wallerstein N. Powerlessness, empowerment, and health: implications for health promotion programs. American Journal of Health Promotion. 1992; 6 :197–205. [ PubMed : 10146784 ]
  • Walsh JME, McPhee SJ. A systems model of clinical preventive care: An analysis of factors influencing patient and physician. Health Education Quarterly. 1992; 19 :157–175. [ PubMed : 1618625 ]
  • Walter HJ. Primary prevention of chronic disease among children: The school-based “Know Your Body Intervention Trials.” Health Education Quarterly. 1989; 16 :201–214. [ PubMed : 2732063 ]
  • Waterworth S, Luker KA. Reluctant collaborators: Do patients want to be involved in decisions concerning care? Journal of Advanced Nursing. 1990; 15 :971–976. [ PubMed : 2229694 ]
  • Weisz JR, Weiss B, Donenberg GR. The lab versus the clinic. Effects of child and adolescent psychotherapy. American Psychologist. 1992; 47 :1578–1585. [ PubMed : 1476328 ]
  • Wennberg JE. Shared decision making and multimedia. In: Harris LM, editor. Health and the New Media: Technologist Transforming Personal And Public Health. Mahwah, NJ: Erlbaum; 1995. pp. 109–126.
  • Wennberg JE. The Dartmouth Atlas Of Health Care In the United States. Hanover, NH: Trustees of Dartmouth College; 1998.
  • Whitehead M. The ownership of research. In: Davies JK, Kelly MP, editors. Healthy Cities: Research and practice. New York: Routledge; 1993. pp. 83–89.
  • Williams DR, Collins C. U.S. socioeconomic and racial differences in health: patterns and explanations. Annual Review of Sociology. 1995; 21 :349–386.
  • Windsor R, Baranowski T, Clark N, Cutter G. Evaluation Of Health Promotion, Health Education And Disease Prevention Programs. Mountain View, CA: Mayfield; 1994.
  • Winkleby MA. The future of community-based cardiovascular disease intervention studies. American Journal of Public Health. 1994; 84 :1369–1372. [ PMC free article : PMC1615141 ] [ PubMed : 8092354 ]
  • Woloshin S, Schwartz LM, Byram SJ, Sox HC, Fischhoff B, Welch HG. Women's understanding of the mammography screening debate. Archives of Internal Medicine. 2000; 160 :1434–1440. [ PubMed : 10826455 ]
  • World Health Organization (WHO). Ottawa Charter for Health Promotion. Copenhagen: WHO; 1986.
  • Yates JF. Englewood Cliffs. NJ: Prentice-Hall; 1990. Judgment and Decision Making.
  • Yeich S, Levine R. Participatory research's contribution to a conceptualization of empowerment. Journal of Applied Social Psychology. 1992; 22 :1894–1908.
  • Yin RK. Applied Social Research Methods Series. Vol. 34. Newbury Park, CA: Sage Publications; 1993. Applications of case study research.
  • Zhu SH, Anderson NH. Self-estimation of weight parameter in multi-attribute analysis. Organizational Behavior and Human Decision Processes. 1991; 48 :36–54.
  • Zich J, Temoshok C. Applied methodology: A primer of pitfalls and opportunities in AIDS research. In: Feldman D, Johnson T, editors. The Social Dimensions of AIDS. New York: Praeger; 1986. pp. 41–60.
