Learn how to develop a program or a policy change that focuses on people's behaviors and how changes in the environment can support those behaviors.
Adapted from "Conducting intervention research: The design and development process" by Stephen B. Fawcett et al.
You've put together a group of motivated, savvy people who really want to make a difference in the community. Maybe you want to increase adults' physical activity and reduce the risk of heart attacks; perhaps you want kids to read more and do better in school. Whatever you want to do, the end is clear enough, but the means--ah, the means are giving you nightmares. How do you reach that goal your group has set for itself? What are the best things to do to achieve it?
Generally speaking, what you're thinking about is intervening in people's environments, making it easier and more rewarding for people to change their behaviors. In the case of encouraging people's physical activity, you might provide information about opportunities, increase access to opportunities, and enhance peer support. Different ways to do this are called, sensibly enough, interventions. Comprehensive interventions combine the various components needed to make a difference.
But what exactly is an intervention? Well, what it is can vary. It might be a program, a change in policy, or a certain practice that becomes popular. What is particularly important about interventions, however, is what they do. Interventions focus on people's behaviors, and on how changes in the environment can support those behaviors. For example, a group might have the goal of trying to stop men from raping women.
However, it's clearly not enough to broadcast messages saying, "You shouldn't commit a rape." More successful interventions attempt to change the conditions that allow and encourage those behaviors to occur. So interventions that might be used to stop rape include:
There are many strong advantages to using interventions as a means to achieve your goals. Some are very apparent; some possibly less so. Some of the more important of these advantages are:
For example, a grade school principal in the Midwest was struck by the amount of unsupervised free time students had between three and six o'clock, before their parents got home from work. From visiting her own mother in a nursing home, she knew, too, of the loneliness felt by many residents of such homes. So she decided to try to lessen both problems by starting a "Caring Hearts" program. Students went to nursing homes once or twice a week after school to visit with elders, play games, and exchange stories. A reporter heard about the program and wrote a feature article on it for the cover of the "Community Life" section of the local newspaper. The response was tremendous. Parents from all across town wanted their children involved, and similar programs were developed in several schools throughout the town.
It makes sense to develop or redesign an intervention when:
The last of these three points deserves some explanation. There will always be things your organization could do that probably should be left to other organizations or individuals. For example, a volunteer crisis counseling center might find it has the ability to serve as a shelter for people needing a place to stay for a few nights. However, doing so would strain its resources and take staff and volunteers away from the primary mission of the agency.
In cases like this, where could does not equal should, your organization might want to think twice about developing a new intervention that will take away from the mission.
So, people are mobilized, the coffee's hot, and you're ready to roll. Your group is ready to take on the issue--you want to design an intervention that will really improve conditions in the area. How do you start?
This could be a problem that needs to be solved, such as, "too many students are dropping out of school." However, it might also be something good that you want to find a way to make more of. For example, you might want to convince more adults to volunteer with school-aged children. At this point, you will probably want to define the problem broadly, as you will be learning more about it in the next few steps. Keep in mind these questions as you think about this:
You don't need to have answers to all of these questions at this point. In fact, it's probably better to keep an open mind until you gather more information, including by talking with people who are affected (we'll get to that in the next few steps). But thinking about these questions will help orient you and get you headed in the right direction.
You will need to gather information about the level of the problem before you act, both to see whether it is as serious as it seems and to establish a baseline against which to measure later improvement (or worsening).
Measurement instruments include:
The group might review the level of the problem over time to detect trends--is the problem getting better or worse? It also might gather comparison information--how are we doing compared to other, similar communities?
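As a sketch of what "establishing a baseline and reviewing trends" can look like once you have numbers in hand, here is a short Python example. All figures, the indicator, and the comparison community are invented for illustration; the point is simply to record a baseline, compute the year-over-year trend, and compare against a similar community.

```python
# Hypothetical example: tracking a community indicator (say, an annual
# high-school dropout rate, in percent) to establish a baseline, detect
# a trend, and compare against a similar community. Data are made up.
from statistics import mean

ours = {2019: 8.1, 2020: 8.6, 2021: 9.2, 2022: 9.8}
comparison = {2019: 7.9, 2020: 7.8, 2021: 8.0, 2022: 7.9}

def simple_trend(series):
    """Average year-over-year change; positive means the problem is growing."""
    years = sorted(series)
    changes = [series[b] - series[a] for a, b in zip(years, years[1:])]
    return mean(changes)

baseline = ours[min(ours)]                      # starting level, for later comparison
trend = simple_trend(ours)                      # average change per year
gap = mean(ours.values()) - mean(comparison.values())  # how we compare to a peer

print(f"Baseline: {baseline}%  Trend: {trend:+.2f}/yr  Gap vs comparison: {gap:+.2f}")
```

A worsening trend alongside a flat comparison community, as in this toy data, is exactly the kind of evidence that justifies developing an intervention.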
In a childhood immunization program, your interventions would be aimed at helping children. Likewise, in a program helping people to live independently, the intervention would try to help older adults or people with disabilities. Your intervention might not be targeted at a particular group at all, but be for the entire community. For example, perhaps you are trying to increase policing to make local parks safer. This change in law enforcement policy would affect people throughout the community.
Usually, interventions will target the people who will directly benefit from the intervention, but this isn't always the case. For example, a program to try to increase the number of parents and guardians who bring in their children for immunizations on time would benefit the children most directly. However, interventions wouldn't target them, since children aren't the ones making the decision. Instead, the primary "targets of change" for your interventions might be parents and health care professionals.
Before we go on, some brief definitions may be helpful. Targets of change are those people whose behavior you are trying to change. As we saw above, these people may be--but are not always--the same people who will benefit directly from the intervention. They often include others, such as public officials, who have the power to make needed changes in the environment. Agents of change are those people who can help make change occur. Examples might be local residents, community leaders, and policy makers. The "movers and the shakers," they are the ones who can make things happen--and who you definitely want to contribute to the solution.
Once you have decided broadly what should happen and who it should happen with, you need to make sure you have involved the people affected. Even if you think you know what they want--ask anyway. For your intervention to be successful, you can't have too much feedback. Some of these folks will likely have a perspective on the issue you hadn't even thought of.
Also, by asking for their help, the program becomes theirs. For example, by giving teachers and parents input in designing a "school success" intervention, they take "ownership" of the program. They become proud of it--which means they won't only use it, they'll also support it and tell their friends, and word will spread.
Again, for ideas on how to find and choose these people, the section mentioned above on targets and agents of change may be helpful.
There are a lot of ways in which you can talk with people affected about the information that interests you. Some of the more common methods include:
When you are talking to people, try to get at the real issue--the one that is the underlying reason for what's going on. It's often necessary to focus not on the problem itself, but on addressing the cause of the problem.
For example, if you want to reduce the number of people in your town who are homeless, you need to find out why so many people in your town lack decent shelter: Do they lack the proper skills to get jobs? Is there a large mentally ill population that isn't receiving the help it should? Your eventual intervention may address deeper causes, seeming to have little to do with reducing homelessness directly, although that remains the goal.
Using the information you gathered in step five, you need to decide on answers to some important questions. These will depend on your situation, but many of the following questions might be appropriate for your purpose:
When you have gotten this far, you are ready to set the broad goals and objectives of what the intervention will do. Remember, at this point you still have NOT decided what that intervention will be. This may seem backwards compared to your normal thinking--but we're starting from the finish line and working backwards. Give it a try--we think it will work for you.
Specifically, you will want to answer the following questions as concretely as you can:
Now, armed with all of the information you have found so far, you are ready to start concentrating on the specific intervention itself. The easiest way to start this is by finding out what other people in your situation have done. Don't reinvent the wheel! There might be some "best practices"-- exceptional programs or policies--out there that are close to what you want to do. It's worth taking the time to try to find them.
Where do you look for promising approaches? There are a lot of possibilities, and how exhaustive your search will be will depend on the time and resources you have (not to mention how long it takes you to find something you like!). But some of the more common resources you might start with include:
Take a sheet of paper and write down all of the possibilities you can think of. If you are deciding as a group, this could be done on poster paper attached to a wall, so everyone can see the possibilities-- this often works to help people come up with other ideas. Be creative!
What can your organization afford to do? And by afford, we mean in terms of money, political capital, time, and other resources. For example, how much time can you put into this? Will the group lose stature in the community, or support from certain people, by pursuing a particular intervention?
When you are considering interventions done by others, look specifically for ones that are:
What barriers and resistance might we face? How can they be overcome? Be prepared for whatever may come your way.
For example, a youth group to prevent substance use wanted to outlaw smoking on the high school campus by everyone, including the teachers and other staff members. However, they knew they would come up against resistance among teachers and staff members who smoked. How might they overcome that opposition?
Here is where we get to the nuts and bolts of designing an intervention.
First, decide the core components that will be used in the intervention. Much like broad strategies, these are the general things you will do as part of the intervention. They are the "big ideas" that can then be further broken down.
There are four classes of components to consider when designing your intervention:
A comprehensive intervention will choose components for each of these four categories. For example, a youth mentoring program might choose the following components:
Next, decide the specific elements that compose each of the components. These elements are the distinct activities that will be done to implement the components.
For example, a comprehensive effort to prevent youth smoking might include public awareness and skills training, restricting tobacco advertising, and modifying access to tobacco products. For the component of trying to modify access, an element of this strategy might be to do 'stings' at convenience stores to see which merchants are selling tobacco illegally to teens. Another element might be to give stiffer penalties to teens who try to buy cigarettes, and to those merchants who sell.
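The component/element breakdown described above can be captured in a simple nested data structure, which makes it easy to review and share with the group. This sketch uses the youth-smoking example from the text; the specific element names are illustrative additions, not prescriptions.

```python
# Sketch of an intervention plan as nested data: each core component maps
# to the concrete elements (activities) that implement it. Component names
# follow the youth-smoking example in the text; elements are illustrative.
intervention = {
    "Public awareness and skills training": [
        "Classroom refusal-skills workshops",
        "Awareness campaign on local radio",
    ],
    "Restricting tobacco advertising": [
        "Propose limits on billboards near schools",
    ],
    "Modifying access to tobacco products": [
        "Merchant 'sting' compliance checks at convenience stores",
        "Stiffer penalties for teens who buy and merchants who sell",
    ],
}

# Print the plan as an outline: one component per heading, elements beneath.
for component, elements in intervention.items():
    print(component)
    for element in elements:
        print(f"  - {element}")
```

Writing the plan down this way also feeds naturally into the action plan in the next step, since each element becomes a line item with an owner and a deadline.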
When you are developing your action plan , you will want it to answer the following questions:
None of us likes to fall flat on our face, but frankly, it's a lot easier when there aren't very many people there to watch us, and when there isn't a lot on the line. By testing your intervention on a small scale, you have the chance to work out the bugs and get back on your feet before the crowd comes in. When doing your pilot test, you need to do the following things:
If you have followed all of the steps above, implementing your action plan will be easier. Go to it!
When the wheels are turning and things seem to be under control, congratulations! You have successfully implemented your intervention! But of course, the work never ends. It's important to see if the intervention is working , and to "tweak" it and make changes as necessary.
Designing an intervention, and doing it well, isn't necessarily an easy task. There are a lot of steps involved and a lot of work to be done. But by systematically going through the process, you are able to catch mistakes before they happen; you can stand on the shoulders of those who have done this work before you and learn from their successes and failures.
Online Resources
Community Health Advisor from the Robert Wood Johnson Foundation is a helpful online tool with detailed information about evidence-based policies and programs to reduce tobacco use and increase physical activity in communities.
The Society for Community Research and Action serves many different disciplines that are involved in strategies to improve communities. It hosts a general electronic discussion list as well as several by special interest.
The U.S. Dept. of Housing and Urban Development features "Success Stories" and gives ideas for ways to solve problems in your community.
The National Civic League provides a database of Success Stories.
The Pew Partnership for Civic Change offers several resources for promising solutions for building strong communities.
The World Health Organization provides information on many types of interventions around the world.
Print Resources
Fawcett, S., Suarez, Y., Balcazar, F., White, G., Paine, A., Blanchard, K., & Embree, M. (1994). Conducting intervention research: The design and development process. In J. Rothman & E. J. Thomas (Eds.), Intervention research: Design and development for human service (pp. 25-54). New York, NY: Haworth Press.
Implementation research is a growing but not well understood field of health research that can contribute to more effective public health and clinical policies and programmes. This article provides a broad definition of implementation research and outlines key principles for how to do it.
The field of implementation research is growing, but it is not well understood despite the need for better research to inform decisions about health policies, programmes, and practices. This article focuses on the context and factors affecting implementation, the key audiences for the research, implementation outcome variables that describe various aspects of how implementation occurs, and the study of implementation strategies that support the delivery of health services, programmes, and policies. We provide a framework for using the research question as the basis for selecting among the wide range of qualitative, quantitative, and mixed methods that can be applied in implementation research, along with brief descriptions of methods specifically suitable for implementation research. Expanding the use of well designed implementation research should contribute to more effective public health and clinical policies and programmes.
Implementation research attempts to solve a wide range of implementation problems; it has its origins in several disciplines and research traditions (supplementary table A). Although progress has been made in conceptualising implementation research over the past decade, 1 considerable confusion persists about its terminology and scope. 2 3 4 The word “implement” comes from the Latin “implere,” meaning to fulfil or to carry into effect. 5 This provides a basis for a broad definition of implementation research that can be used across research traditions and has meaning for practitioners, policy makers, and the interested public: “Implementation research is the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).”
Implementation research can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation, including how to introduce potential solutions into a health system or how to promote their large scale use and sustainability. The intent is to understand what, why, and how interventions work in “real world” settings and to test approaches to improve them.
Implementation research seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects. This implies working with populations that will be affected by an intervention, rather than selecting beneficiaries who may not represent the target population of an intervention (such as studying healthy volunteers or excluding patients who have comorbidities).
Context plays a central role in implementation research. Context can include the social, cultural, economic, political, legal, and physical environment, as well as the institutional setting, comprising various stakeholders and their interactions, and the demographic and epidemiological conditions. The structure of the health systems (for example, the roles played by governments, non-governmental organisations, other private providers, and citizens) is particularly important for implementation research on health.
Implementation research is especially concerned with the users of the research and not purely the production of knowledge. These users may include managers and teams using quality improvement strategies, executive decision makers seeking advice for specific decisions, policy makers who need to be informed about particular programmes, practitioners who need to be convinced to use interventions that are based on evidence, people who are influenced to change their behaviour to have a healthier life, or communities who are conducting the research and taking action through the research to improve their conditions (supplementary table A). One important implication is that often these actors should be intimately involved in the identification, design, and conduct phases of research and not just be targets for dissemination of study results.
Implementation outcome variables describe the intentional actions to deliver services. 6 These implementation outcome variables—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability—can all serve as indicators of the success of implementation (table 1). Implementation research uses these variables to assess how well implementation has occurred or to provide insights about how implementation contributes to health status or other important health outcomes.
Implementation outcome variables
Curran and colleagues defined an “implementation intervention” as a method to “enhance the adoption of a ‘clinical’ intervention,” such as the use of job aids, provider education, or audit procedures. 7 The concept can be broadened to any type of strategy that is designed to support a clinical or population and public health intervention (for example, outreach clinics and supervision checklists are implementation strategies used to improve the coverage and quality of immunisation).
A review of ways to improve health service delivery in low and middle income countries identified a wide range of successful implementation strategies (supplementary table B). 8 Even in the most resource constrained environments, measuring change, informing stakeholders, and using information to guide decision making were found to be critical to successful implementation.
Other factors that influence implementation may need to be considered in implementation research. Sabatier summarised a set of such factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources). 9
The large array of contextual factors that influence implementation, interact with each other, and change over time highlights the fact that implementation often occurs as part of complex adaptive systems. 10 Some implementation strategies are particularly suitable for working in complex systems. These include strategies to provide feedback to key stakeholders and to encourage learning and adaptation by implementing agencies and beneficiary groups. Such strategies have implications for research, as the study methods need to be sufficiently flexible to account for changes or adaptations in what is actually being implemented. 8 11 Research designs that depend on having a single and fixed intervention, such as a typical randomised controlled trial, would not be an appropriate design to study phenomena that change, especially when they change in unpredictable and variable ways.
Another implication of studying complex systems is that the research may need to use multiple methods and different sources of information to understand an implementation problem. Because implementation activities and effects are not usually static or linear processes, research designs often need to be able to observe and analyse these sometimes iterative and changing elements at several points in time and to consider unintended consequences.
As in other types of health systems research, the research question is king in implementation research. Implementation research takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used. Implementation research questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective (examples are in supplementary table C). 12 13
Implementation research can overlap with other types of research used in medicine and public health, and the distinctions are not always clear cut. A range of implementation research exists, based on the centrality of implementation in the research question, the degree to which the research takes place in a real world setting with routine populations, and the role of implementation strategies and implementation variables in the research (figure).
Spectrum of implementation research 33
A more detailed description of the research question can help researchers and practitioners to determine the type of research methods that should be used. In table 2, we break down the research question first by its objective: to explore, describe, influence, explain, or predict. This is followed by a typical implementation research question based on each objective. Finally, we describe a set of research methods for each type of research question.
Type of implementation research objective, implementation question, and research methods
Much of evidence based medicine is concerned with the objective of influence, or whether an intervention produces an expected outcome, which can be broken down further by the level of certainty in the conclusions drawn from the study. The nature of the inquiry (for example, the amount of risk and considerations of ethics, costs, and timeliness), and the interests of different audiences, should determine the level of uncertainty. 8 14 Research questions concerning programmatic decisions about the process of an implementation strategy may justify a lower level of certainty for the manager and policy maker, using research methods that would support an adequacy or plausibility inference. 14 Where a high risk of harm exists and sufficient time and resources are available, a probability study design might be more appropriate, in which the result in an area where the intervention is implemented is compared with areas without implementation with a low probability of error (for example, P<0.05). These differences in the level of confidence affect the study design in terms of sample size and the need for concurrent or randomised comparison groups. 8 14
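The "probability design" logic above can be illustrated with a minimal sketch: comparing an outcome in intervention areas with comparison areas and asking whether the difference clears a P<0.05 threshold. The example below uses a two-proportion z-test on invented coverage figures; a real analysis would use proper statistical software and account for clustering, confounding, and sample-size planning.

```python
# Illustrative two-proportion z-test: comparing an outcome (for example,
# immunisation coverage) in areas with and without an intervention.
# Figures are invented; real studies must also handle clustering and
# confounding, which this normal-approximation sketch ignores.
from math import sqrt, erf

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail probability under the standard normal approximation.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 240/300 covered in intervention areas vs 195/300 in comparison areas.
p = two_proportion_p_value(240, 300, 195, 300)
print(f"p = {p:.4f}, significant at 0.05: {p < 0.05}")
```

The point of the passage stands either way: whether this level of statistical certainty is worth pursuing depends on the risk of harm and the time and resources available, not on the arithmetic alone.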
A wide range of qualitative and quantitative research methods can be used in implementation research (table 2). The box gives a set of basic questions to guide the design or reporting of implementation research that can be used across methods. More in-depth criteria have also been proposed to assess the external validity or generalisability of findings. 15 Some research methods have been developed specifically to deal with implementation research questions or are particularly suitable to implementation research, as identified below.
Does the research clearly aim to answer a question concerning implementation?
Does the research clearly identify the primary audiences for the research and how they would use the research?
Is there a clear description of what is being implemented (for example, details of the practice, programme, or policy)?
Does the research involve an implementation strategy? If so, is it described and examined in its fullness?
Is the research conducted in a “real world” setting? If so, is the context and sample population described in sufficient detail?
Does the research appropriately consider implementation outcome variables?
Does the research appropriately consider context and other factors that influence implementation?
Does the research appropriately consider changes over time and the level of complexity of the system, including unintended consequences?
Pragmatic trials, or practical trials, are randomised controlled trials in which the main research question focuses on effectiveness of an intervention in a normal practice setting with the full range of study participants. 16 This may include pragmatic trials on new healthcare delivery strategies, such as integrated chronic care clinics or nurse run community clinics. This contrasts with typical randomised controlled trials that look at the efficacy of an intervention in an “ideal” or controlled setting and with highly selected patients and standardised clinical outcomes, usually of a short term nature.
Effectiveness-implementation hybrid designs are intended to assess the effectiveness of both an intervention and an implementation strategy. 7 These studies include components of an effectiveness design (for example, randomised allocation to intervention and comparison arms) but add the testing of an implementation strategy, which may also be randomised. This might include testing the effectiveness of a package of delivery and postnatal care in under-served areas, as well as testing several strategies for providing the care. Whereas pragmatic trials try to fix the intervention under study, effectiveness-implementation hybrids also intervene and/or observe the implementation process as it actually occurs. This can be done by assessing implementation outcome variables.
Quality improvement studies typically involve a set of structured and cyclical processes, often called the plan-do-study-act cycle, and apply scientific methods on a continuous basis to formulate a plan, implement the plan, and analyse and interpret the results, followed by an iteration of what to do next. 17 18 The focus might be on a clinical process, such as how to reduce hospital acquired infections in the intensive care unit, or management processes such as how to reduce waiting times in the emergency room. Guidelines exist on how to design and report such research—the Standards for Quality Improvement Reporting Excellence (SQUIRE). 17
Speroff and O’Connor describe a range of plan-do-study-act research designs, noting that they have in common the assessment of responses measured repeatedly and regularly over time, either in a single case or with comparison groups. 18 Balanced scorecards integrate performance measures across a range of domains and feed into regular decision making. 19 20 Standardised guidance for using good quality health information systems and health facility surveys has been developed and often provides the sources of information for these quasi-experimental designs. 21 22 23
Participatory action research refers to a range of research methods that emphasise participation and action (that is, implementation), using methods that involve iterative processes of reflection and action, “carried out with and by local people rather than on them.” 24 In participatory action research, a distinguishing feature is that the power and control over the process rests with the participants themselves. Although most participatory action methods involve qualitative methods, quantitative and mixed methods techniques are increasingly being used, such as for participatory rural appraisal or participatory statistics. 25 26
Mixed methods research uses both qualitative and quantitative methods of data collection and analysis in the same study. Although not designed specifically for implementation research, mixed methods are particularly suitable because they provide a practical way to understand multiple perspectives, different types of causal pathways, and multiple types of outcomes—all common features of implementation research problems.
Many different schemes exist for describing different types of mixed methods research, on the basis of the emphasis of the study, the sampling schemes for the different components, the timing and sequencing of the qualitative and quantitative methods, and the level of mixing between the qualitative and quantitative methods. 27 28 Broad guidance on the design and conduct of mixed methods designs is available. 29 30 31 A scheme for good reporting of mixed methods studies involves describing the justification for using a mixed methods approach to the research question; describing the design in terms of the purpose, priority, and sequence of methods; describing each method in terms of sampling, data collection, and analysis; describing where the integration has occurred, how it has occurred, and who has participated in it; describing any limitation of one method associated with the presence of the other method; and describing any insights gained from mixing or integrating methods. 32
Implementation research aims to cover a wide set of research questions, implementation outcome variables, factors affecting implementation, and implementation strategies. This paper has identified a range of qualitative, quantitative, and mixed methods that can be used according to the specific research question, as well as several research designs that are particularly suited to implementation research. Further details of these concepts can be found in a new guide developed by the Alliance for Health Policy and Systems Research. 33
Implementation research has its origins in many disciplines and is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention
In health research, these intentions can be policies, programmes, or individual practices (collectively called interventions)
Implementation research seeks to understand and work in “real world” or usual practice settings, paying particular attention to the audience that will use the research, the context in which implementation occurs, and the factors that influence implementation
A wide variety of qualitative, quantitative, and mixed methods techniques can be used in implementation research, which are best selected on the basis of the research objective and specific questions related to what, why, and how interventions work
Implementation research may examine strategies that are specifically designed to improve the carrying out of health interventions or assess variables that are defined as implementation outcomes
Implementation outcomes include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability
Cite this as: BMJ 2013;347:f6753
Contributors: All authors contributed to the conception and design, analysis and interpretation, drafting the article, or revising it critically for important intellectual content, and all gave final approval of the version to be published. NT had the original idea for the article, which was discussed by the authors (except OA) as well as George Pariyo, Jim Sherry, and Dena Javadi at a meeting at the World Health Organization (WHO). DHP and OA did the literature reviews, and DHP wrote the original outline and the draft manuscript, tables, and boxes. OA prepared the original figure. All authors reviewed the draft article and made substantial revisions to the manuscript. DHP is the guarantor.
Funding: Funding was provided by the governments of Norway and Sweden and the UK Department for International Development (DFID) in support of the WHO Implementation Research Platform, which financed a meeting of authors and salary support for NT. DHP is supported by the Future Health Systems research programme consortium, funded by DFID for the benefit of developing countries (grant number H050474). The funders played no role in the design, conduct, or reporting of the research.
Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support for the submitted work as described above; NT and TA are employees of the Alliance for Health Policy and Systems Research at WHO, which is supporting their salaries to work on implementation research; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review: Invited by journal; commissioned by WHO; externally peer reviewed.
When social workers draw on experience, theory, or data to develop new strategies or enhance existing ones, they are conducting intervention research. This relatively new field involves program design, implementation, and evaluation, and requires a theory-based, systematic approach. Intervention Research presents such a framework. The five-step strategy described in this brief but thorough book ushers the reader from an idea’s germination through writing a treatment manual, assessing program efficacy and effectiveness, and disseminating findings. Rich with examples drawn from child welfare, school-based prevention, medicine, and juvenile justice, Intervention Research relates each step of the process to current social work practice. It also explains how to adapt interventions for new contexts and offers insights about changes and challenges in the field. This pocket guide will serve as a solid reference for those already in the field and help the next generation of social workers develop the skills to contribute to the evolving field of intervention research.
Institute of Medicine (US) Committee on Health and Behavior: Research, Practice, and Policy. Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington (DC): National Academies Press (US); 2001.
Efforts to change health behaviors should be guided by clear criteria of efficacy and effectiveness of the interventions. However, this has proved surprisingly complex and is the source of considerable debate.
The principles of science-based interventions cannot be overemphasized. Medical practices and community-based programs are often based on professional consensus rather than evidence. The efficacy of interventions can only be determined by appropriately designed empirical studies. Randomized clinical trials provide the most convincing evidence, but may not be suitable for examining all of the factors and interactions addressed in this report.
Information about efficacious interventions needs to be disseminated to practitioners. Furthermore, feedback is needed from practitioners to determine the overall effectiveness of interventions in real-life settings. Information from physicians, community leaders, public health officials, and patients is important for determining the overall effectiveness of interventions.
The preceding chapters review contemporary research on health and behavior from the broad perspectives of the biological, behavioral, and social sciences. A recurrent theme is that continued multidisciplinary and interdisciplinary efforts are needed. Enough research evidence has accumulated to warrant wider application of this information. To extend its use, however, existing knowledge must be evaluated and disseminated. This chapter addresses the complex relationship between research and application. The challenge of bridging research and practice is discussed with respect to clinical interventions, communities, public agencies, systems of health care delivery, and patients.
During the early 1980s, the National Heart, Lung, and Blood Institute (NHLBI) and the National Cancer Institute (NCI) suggested a sequence of research phases for the development of programs that were effective in modifying behavior ( Greenwald, 1984 ; Greenwald and Cullen, 1984 ; NHLBI, 1983 ): hypothesis generation (phase I), intervention methods development (phase II), controlled intervention trials (phase III), studies in defined populations (phase IV), and demonstration research (phase V). Those phases reflect the importance of methods development in providing a basis for large-scale trials and the need for studies of the dissemination and diffusion process as a means of identifying effective application strategies. A range of research and evaluation methods are required to address diverse needs for scientific rigor, appropriateness and benefit to the communities involved, relevance to research questions, and flexibility in cost and setting. Inclusion of the full range of phases from hypothesis generation to demonstration research should facilitate development of a more balanced perspective on the value of behavioral and psychosocial interventions.
Choice of outcome measures.
The goals of health care are to increase life expectancy and improve health-related quality of life. Major clinical trials in medicine have evolved toward the documentation of those outcomes. As more trials documented effects on total mortality, some surprising results emerged. For example, studies commonly report that, compared with placebo, lipid-lowering agents reduce total cholesterol and low-density lipoprotein cholesterol, and might increase high-density lipoprotein cholesterol, thereby reducing the risk of death from coronary heart disease ( Frick et al., 1987 ; Lipid Research Clinics Program, 1984 ). Those trials usually were not associated with reductions in death from all causes ( Golomb, 1998 ; Muldoon et al., 1990 ). Similarly, He et al. (1999) demonstrated that intake of dietary sodium in overweight people was not related to the incidence of coronary heart disease but was associated with mortality from coronary heart disease. Another example can be found in the treatment of cardiac arrhythmia. Among adults who previously suffered a myocardial infarction, symptomatic cardiac arrhythmia is a risk factor for sudden death ( Bigger, 1984 ). However, a randomized drug trial in 1455 post-infarction patients demonstrated that those who were randomly assigned to take an anti-arrhythmia drug showed reduced arrhythmia, but were significantly more likely to die from arrhythmia and from all causes than those assigned to take a placebo. If investigators had measured only heart rhythm changes, they would have concluded that the drug was beneficial. Only when primary health outcomes were considered was it established that the drug was dangerous ( Cardiac Arrhythmia Suppression Trial (CAST) Investigators, 1989 ).
Many behavioral intervention trials document the capacity of interventions to modify risk factors ( NHLBI, 1998 ), but relatively few Level I studies measured outcomes of life expectancy and quality of life. As the examples above point out, assessing risk factors may not be adequate. Ramifications of interventions are not always apparent until they are fully evaluated. It is possible that a recommendation for a behavioral change could increase mortality through unforeseen consequences. For example, a recommendation of increased exercise might heighten the incidence of roadside auto fatalities. Although risk factor modification is expected to improve outcomes, assessment of increased longevity is essential. Measurement of mortality as an endpoint does necessitate long-duration trials that can incur greater costs.
One approach to representing outcomes comprehensively is the quality-adjusted life year (QALY). QALY is a measure of life expectancy ( Gold et al., 1996 ; Kaplan and Anderson, 1996 ) that integrates mortality and morbidity in terms of equivalents of well-years of life. If a woman expected to live to age 75 dies of lung cancer at 50, the disease caused 25 lost life-years. If 100 women with life expectancies of 75 die at age 50, 2,500 (100×25 years) life-years would be lost. But death is not the only outcome of concern. Many adults suffer from diseases that leave them more or less disabled for long periods. Although still alive, their quality of life is diminished. QALYs account for the quality-of-life consequences of illnesses. For example, a disease that reduces quality by one-half reduces QALY by 0.5 during each year the patient suffers. If the disease affects 2 people, it will reduce QALY by 1 (2×0.5) each year. A pharmaceutical treatment that improves life by 0.2 QALYs for 5 people will result in the equivalent of 1 QALY if the benefit is maintained over a 1-year period. The basic assumption is that 2 years scored as 0.5 each add to the equivalent of 1 year of complete wellness. Similarly, 4 years scored as 0.25 each are equivalent to 1 year of complete wellness. A treatment that boosts a patient's health from 0.50 to 0.75 on a scale ranging from 0.0 (for death) to 1.0 (for the highest level of wellness) adds the equivalent of 0.25 QALY. If the treatment is applied to 4 patients, and the duration of its effect is 1 year, the effect of the treatment would be equivalent to 1 year of complete wellness. This approach has the advantage of considering benefits and side-effects of treatment programs in a common term. Although QALYs typically are used to assess effects on patients, they also can be used as a measure of effect on others, including caregivers who are placed at risk because their experience is stressful. 
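The QALY bookkeeping in the examples above is simple enough to check directly. The sketch below is a minimal illustration (not from the source): total QALYs are the sum of per-year quality weights on the 0.0 (death) to 1.0 (complete wellness) scale.

```python
def qalys(quality_weights):
    """Total QALYs: sum of per-year quality weights,
    where 0.0 is death and 1.0 is complete wellness."""
    return sum(quality_weights)

# Two years lived at 0.5 are equivalent to one well-year,
# as are four years lived at 0.25.
print(qalys([0.5, 0.5]))     # 1.0
print(qalys([0.25] * 4))     # 1.0

# A treatment that raises 4 patients from 0.50 to 0.75 on the scale
# for one year adds 4 * 0.25 = 1.0 QALY in total.
gain_per_patient = 0.75 - 0.50
print(4 * gain_per_patient)  # 1.0
```

The same additivity is what makes QALYs usable as a common denominator for benefits and side effects across very different treatments.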
Most important, QALYs are required for many methods of cost-effectiveness analysis. The most controversial aspect of the methodology is the method for assigning values along the scale. Three methods are commonly used: standard reference gamble, time-tradeoff, and rating scales. Economists and psychologists differ on their preferred approach to preference assessment. Economists typically prefer the standard gamble because it is consistent with the axioms of choice outlined in decision theory ( Torrance, 1976 ). Economists also accept time-tradeoff because it represents choice even though it is not exactly consistent with the axioms derived from theory ( Bennett and Torrance, 1996 ). However, evidence from experimental studies questions many of the assumptions that underlie economic models of choice. In particular, human evaluators do poorly at integrating complex probability information when making decisions involving risk ( Tversky and Fox, 1995 ). Economic models often assume that choice is rational. However, psychological experiments suggest that methods commonly used for choice studies do not represent the true underlying preference continuum ( Zhu and Anderson, 1991 ). Some evidence supports the use of simple rating scales ( Anderson and Zalinski, 1990 ). Recently, research by economists has attempted to integrate studies from cognitive science, while psychologists have begun investigations of choice and decision-making ( Tversky and Shafir, 1992 ). A significant body of studies demonstrates that different methods for estimating preferences will produce different values ( Lenert and Kaplan, 2000 ). This happens because the methods ask different questions. More research is needed to clarify the best method for valuing health states.
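As a concrete illustration of how two of these elicitation methods differ, the sketch below uses the textbook definitions (an assumption on our part; the chapter does not give the formulas): in a time-tradeoff, a respondent indifferent between t years in a health state and x < t healthy years implies a weight of x/t, while in a standard gamble the weight is the indifference probability p of winning full health against immediate death.

```python
def time_tradeoff_weight(years_in_state, equivalent_healthy_years):
    """Weight implied by indifference between `years_in_state` lived in
    the health state and `equivalent_healthy_years` lived in full health."""
    return equivalent_healthy_years / years_in_state

def standard_gamble_weight(p_full_health):
    """Weight implied by indifference between the health state for certain
    and a gamble giving full health with probability p (death otherwise).
    Under expected-utility axioms the weight equals p itself."""
    return p_full_health

# A respondent indifferent between 10 years in the state and
# 7 healthy years implies a weight of 0.7 for that state.
print(time_tradeoff_weight(10, 7))   # 0.7
```

The empirical finding cited above, that different methods produce different values, shows up here as respondents whose x/t and p answers disagree for the very same health state.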
The weighting used for quality adjustment comes from surveys of patient or population groups, an aspect of the method that has generated considerable discussion among methodologists and ethicists ( Kaplan, 1994 ). Preference weights are typically obtained by asking patients or people randomly selected from a community to rate cases that describe people in various states of wellness. The cases usually describe level of functioning and symptoms. Although some studies show small but significant differences in preference ratings between demographic groups ( Kaplan, 1998 ), most studies have shown a high degree of similarity in preferences (see Kaplan, 1994 , for review). A panel convened by the U.S. Department of Health and Human Services reviewed methodologic issues relevant to cost and utility analysis (the formal name for this approach) in health care. The panel concluded that population averages rather than patient group preference weights are more appropriate for policy analysis ( Gold et al., 1996 ).
Several authors have argued that resource allocation on the basis of QALYs is unethical (see La Puma and Lawlor, 1990 ). Those who reject the use of QALY suggest that QALY cannot be measured. However, the reliability and validity of quality-of-life measures are well documented ( Spilker, 1996 ). Another ethical challenge to QALYs is that they force health care providers to make decisions based on cost-effectiveness rather than on the health of the individual patient.
Another common criticism of QALYs is that they discriminate against the elderly and the disabled. Older people and those with disabilities have lower QALYs, so it is assumed that fewer services will be provided to them. However, QALYs consider the increment in benefit, not the starting point. Programs that prevent the decline of health status or programs that prevent deterioration and functioning among the disabled do perform well in QALY outcome analysis. It is likely that QALYs will not reveal benefits for heroic care at the very end of life. However, most people prefer not to take treatment that is unlikely to increase life expectancy or improve quality of life ( Schneiderman et al., 1992 ). Ethical issues relevant to the use of cost-effectiveness analysis are considered in detail in the report of the Panel on Cost-Effectiveness in Health and Medicine ( Gold et al., 1996 ).
Behavioral interventions have been used to modify behaviors that put people at risk for disease, to manage disease processes, and to help patients cope with their health conditions. Behavioral and psychosocial interventions take many forms. Some provide knowledge or persuasive information; others involve individual, family, group, or community programs to change or support changes in health behaviors (such as in tobacco use, physical activity, or diet); still others involve patient or health care provider education to stimulate behavior change or risk-avoidance. Behavioral and psychosocial interventions are not without consequence for patients and their families, friends, and acquaintances; interventions cost money, take time, and are not always enjoyable. Justification for interventions requires assurance that the changes advocated are valuable. The kinds of evidence required to evaluate the benefits of interventions are discussed below.
Evidence-based medicine uses the best available scientific evidence to inform decisions about what treatments individual patients should receive ( Sackett et al., 1997 ). Not all studies are equally credible. Last (1995) offered a hierarchy of clinical research evidence, shown in Table 7-1 . Level I, the most rigorous, is reserved for randomized clinical trials (RCTs), in which participants are randomly assigned to the experimental condition or to a meaningful comparison condition—the most widely accepted standard for evaluating interventions. Such trials involve either “single blinding” (investigators know which participants are assigned to the treatment and control groups but participants do not) or “double blinding” (neither the investigators nor the participants know the group assignments) ( Friedman et al., 1985 ). Double blinding is difficult in behavioral intervention trials, but there are some good examples of single-blind experiments. Reviews of the literature often grade studies according to levels of evidence. Level I evidence is considered more credible than Level II evidence; Level III evidence is given little weight.
Research Evidence Hierarchy.
There has been concern about the generalizability of RCTs ( Feinstein and Horwitz, 1997 ; Horwitz, 1987a , b ; Horwitz and Daniels, 1996 ; Horwitz et al., 1996 , 1990 ; Rabeneck et al., 1992 ), specifically because the recruitment of participants can result in samples that are not representative of the population ( Seligman, 1996 ). There is a trend toward increased heterogeneity of the patient population in RCTs. Even so, RCTs often include stringent criteria for participation that can exclude participants on the basis of comorbid conditions or other characteristics that occur frequently in the population. Furthermore, RCTs are often conducted in specialized settings, such as university-based teaching hospitals, that do not draw representative population samples. Trials sometimes exhibit large dropout rates, which further undermine the generalizability of their findings.
Oldenburg and colleagues (1999) reviewed all papers published in 1994 in 12 selected journals on public health, preventive medicine, health behavior, and health promotion and education. They graded the studies according to evidence level: 2% were Level I RCTs and 48% were Level II. The authors expressed concern that behavioral research might not be credible when evaluated against systematic experimental trials, which are more common in other fields of medicine. Studies with more rigorous experimental designs are less likely to demonstrate treatment effectiveness ( Heaney and Goetzel, 1997 ; Mosteller and Colditz, 1996 ). Although there have been relatively few behavioral intervention trials, those that have been published have supported the efficacy of behavioral interventions in a variety of circumstances, including smoking, chronic pain, cancer care, and bulimia nervosa ( Compas et al., 1998 ).
Efficacy is the capacity of an intervention to work under controlled conditions. Randomized clinical trials are essential in establishing the effects of a clinical intervention ( Chambless and Hollon, 1998 ) and in determining that an intervention can work. However, demonstration of efficacy in an RCT does not guarantee that the treatment will be effective in actual practice settings. For example, some reviews suggest that behavioral interventions in psychotherapy are generally beneficial ( Matt and Navarro, 1997 ), others suggest that interventions are less effective in clinical settings than in the laboratory ( Weisz et al., 1992 ), and others find particular interventions equally effective in experimental and clinical settings ( Shadish et al., 1997 ).
The Division of Clinical Psychology of the American Psychological Association recently established criteria for “empirically supported” psychological treatments ( Chambless and Hollon, 1998 ). The criteria are relatively stringent, reflecting an effort to establish a high standard for validating the efficacy of psychological interventions. A treatment is considered empirically supported if it is found to be more effective than either an alternative form of treatment or a credible control condition in at least two RCTs. The effects must be replicated by at least two independent laboratories or investigative teams to ensure that they are not attributable to special characteristics of a specific investigator or setting. Several health-related behavior change interventions meeting those criteria have been identified, including interventions for management of chronic pain, smoking cessation, adaptation to cancer, and treatment of eating disorders ( Compas et al., 1998 ).
An intervention that has failed to meet the criteria still has potential value and might represent important or even landmark progress in the field of health-related behavior change. As in many fields of health care, there historically has been little effort to set standards for psychological treatments for health-related problems or disease. Recently, however, managed-care and health maintenance organizations have begun to monitor and regulate both the type and the duration of psychological treatments that are reimbursed. A common set of criteria for making coverage decisions has not been articulated, so decisions are made in the absence of appropriate scientific data to support them. It is in the best interest of the public and those involved in the development and delivery of health-related behavior change interventions to establish criteria that are based on the best available scientific evidence. Criteria for empirically supported treatments are an important part of that effort.
Evaluating the effectiveness of interventions in communities requires different methods. Developing and testing interventions that take a more comprehensive, ecologic approach, and that are effective in reducing risk-related behaviors and influencing the social factors associated with health status, require many levels and types of research ( Flay, 1986 ; Green et al., 1995 ; Greenwald and Cullen, 1984 ). Questions have been raised about the appropriateness of RCTs for addressing research questions when the unit of analysis is larger than the individual, such as a group, organization, or community ( McKinlay, 1993 ; Susser, 1995 ). While this discussion uses the community as the unit of analysis, similar principles apply to interventions aimed at groups, families, or organizations.
Review criteria of community interventions have been suggested by Hancock and colleagues ( Hancock et al., 1997 ). Their criteria for rigorous scientific evaluation of community intervention trials include four domains: (1) design, including the randomization of communities to condition, and the use of sampling methods that assure representativeness of the entire population; (2) measures, including the use of outcome measures with demonstrated validity and reliability and process measures that describe the extent to which the intervention was delivered to the target audience; (3) analysis, including consideration of both individual variation within each community and community-level variation within each treatment condition; and (4) specification of the intervention in enough detail to allow replication.
Randomization of communities to various conditions raises challenges for intervention research in terms of expense and statistical power ( Koepsell et al., 1995 ; Murray, 1995 ). The restricted hypotheses that RCTs test cannot adequately consider the complexities and multiple causes of human behavior and health status embedded within communities ( Israel et al., 1995 ; Klitzner, 1993 ; McKinlay, 1993 ; Susser, 1995 ). A randomized controlled trial might actually alter the interaction between an intervention and a community and result in an attenuation of the effectiveness of the intervention ( Fisher, 1995 ; McKinlay, 1993 ). At the level of community interventions, experimental control might not be possible, especially when change is unplanned. That is, given the different sociopolitical structures, cultures, and histories of communities and the numerous factors that are beyond a researcher's ability to control, it might be impossible to identify and maintain a commensurate comparison community ( Green et al., 1996 ; Hollister and Hill, 1995 ; Israel et al., 1995 ; Klitzner, 1993 ; Mittelmark et al., 1993 ; Susser, 1995 ). Using a control community does not completely solve the problem of comparison, however, because one “cannot assume that a control community will remain static or free of influence by national campaigns or events occurring in the experimental communities” ( Green et al., 1996 , p. 274).
Clear specification of the conceptual model guiding a community intervention is needed to clarify how an intervention is expected to work ( Koepsell, 1998 ; Koepsell et al., 1992 ). This is the contribution of the Theory of Change model for communities described in Chapter 6 . A theoretical framework is necessary to specify mediating mechanisms and modifying conditions. Mediating mechanisms are pathways, such as social support, by which the intervention induces the outcomes; modifying conditions, such as social class, are not affected by the intervention but can influence outcomes independently. Such an approach offers numerous advantages, including the ability to identify pertinent variables and how, when, and in whom they should be measured; the ability to evaluate and control for sources of extraneous variance; and the ability to develop a cumulative knowledge base about how and when programs work ( Bickman, 1987 ; Donaldson et al., 1994 ; Lipsey, 1993 ; Lipsey and Polard, 1989 ). When an intervention is unsuccessful at stimulating change, data on mediating mechanisms can allow investigators to determine whether the failure is due to the inability of the program to activate the causal processes that the theory predicts or to an invalid program theory ( Donaldson et al., 1994 ).
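To make the mediating-mechanism idea concrete, here is a minimal simulation (an illustrative sketch, not from the source; the variable names and effect sizes are invented) using the product-of-coefficients decomposition common in mediation analysis: in linear OLS models, the intervention's total effect on the outcome splits exactly into a direct effect plus the indirect path through the mediator (a × b).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
intervention = rng.integers(0, 2, n).astype(float)  # 0/1 assignment
# Mediating mechanism (e.g. social support) raised by the intervention.
support = 0.8 * intervention + rng.normal(0, 1, n)
# Outcome driven mostly through the mediator, plus a small direct effect.
outcome = 0.5 * support + 0.2 * intervention + rng.normal(0, 1, n)

def ols(y, *xs):
    """OLS coefficients (intercept first) for y regressed on xs."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(support, intervention)[1]                    # intervention -> mediator
direct, b = ols(outcome, intervention, support)[1:]  # adjusted effects
total = ols(outcome, intervention)[1]                # unadjusted total effect

# For OLS with an intercept the decomposition is an exact identity:
# total = direct + a*b.
print(abs(total - (direct + a * b)) < 1e-8)  # True
```

If an intervention fails, estimates like `a` and `b` localize the failure in just the way the paragraph describes: a small `a` means the program never moved the mediator, while a small `b` means the program theory about the mediator was wrong.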
Small-scale, targeted studies sometimes provide a basis for refining large-scale intervention designs and enhance understanding of methods for influencing group behavior and social change ( Fisher, 1995 ; Susser, 1995 ; Winkleby, 1994 ). For example, more in-depth, comparative, multiple-case-study evaluations are needed to explain and identify lessons learned regarding the context, process, impacts, and outcomes of community-based participatory research ( Israel et al., 1998 ).
As reviewed in Chapter 4 , broad social and societal influences have an impact on health. This concept points to the importance of an approach that recognizes individuals as embedded within social, political, and economic systems that shape their behaviors and constrain their access to resources necessary to maintain their health ( Brown, 1991 ; Gottlieb and McLeroy, 1994 ; Krieger, 1994 ; Krieger et al., 1993 ; Lalonde, 1974 ; Lantz et al., 1998 ; McKinlay, 1993 ; Sorensen et al., 1998a , b ; Stokols, 1992 , 1996 ; Susser and Susser, 1996a , b ; Williams and Collins, 1995 ; World Health Organization [WHO], 1986 ). It also points to the importance of expanding the evaluation of interventions to incorporate such factors ( Fisher, 1995 ; Green et al., 1995 ; Hatch et al., 1993 ; Israel et al., 1995 ; James, 1993 ; Pearce, 1996 ; Sorensen et al., 1998a , b ; Steckler et al., 1992 ; Susser, 1995 ).
This is exemplified by community-based participatory programs, which are collaborative efforts among community members, organization representatives, a wide range of researchers and program evaluators, and others ( Israel et al., 1998 ). The partners contribute “unique strengths and shared responsibilities” ( Green et al., 1995 , p. 12) to enhance understanding of a given phenomenon, and they integrate the knowledge gained from interventions to improve the health and well-being of community members ( Dressler, 1993 ; Eng and Blanchard, 1990–1 ; Hatch et al., 1993 ; Israel et al., 1998 ; Schulz et al., 1998a ). It provides “the opportunity…for communities and science to work in tandem to ensure a more balanced set of political, social, economic, and cultural priorities, which satisfy the demands of both scientific research and communities at higher risk” ( Hatch et al., 1993 , p. 31). The advantages and rationale of community-based participatory research are summarized in Table 7–2 ( Israel et al., 1998 ). The term “community-based participatory research” is used here to clearly differentiate it from “community-based research,” which is often used in reference to research that is placed in the community but in which community members are not actively involved.
Table 7-2. Rationale for Community-Based Participatory Research.
Table 7-3 presents a set of principles, or characteristics, that capture the important components of community-based participatory research and evaluation ( Israel et al., 1998 ). Each principle constitutes a continuum and represents a goal, for example, equitable participation and shared control over all phases of the research process ( Cornwall, 1996 ; Dockery, 1996 ; Green et al., 1995 ). Although the principles are presented here as distinct items, community-based participatory research integrates them.
Table 7-3. Principles of Community-Based Participatory Research and Evaluation.
There are four major foci of evaluation with implications for research design: context, process, impact, and outcome ( Israel, 1994 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ). A comprehensive community-based participatory evaluation would include all types, but it is often financially practical to pursue only one or two. Evaluation design is extensively reviewed in the literature ( Campbell and Stanley, 1963 ; Cook and Reichardt, 1979 ; Dignan, 1989 ; Green, 1977 ; Green and Gordon, 1982 ; Green and Lewis, 1986 ; Guba and Lincoln, 1989 ; House, 1980 ; Israel et al., 1995 ; Patton, 1987 , 1990 ; Rossi and Freeman, 1989 ; Shadish et al., 1991 ; Stone et al., 1994 ; Thomas and Morgan, 1991 ; Windsor et al., 1994 ; Yin, 1993 ).
Context encompasses the events, influences, and changes that occur naturally in the project setting or environment during the intervention that might affect the outcomes ( Israel et al., 1995 ). Context data provide information about how particular settings facilitate or impede program success. Decisions must be made about which of the many factors in the context of an intervention might have the greatest effect on project success.
Evaluation of process assesses the extent, fidelity, and quality of the implementation of interventions ( McGraw et al., 1994 ). It describes the actual activities of the intervention and the extent of participant exposure, provides quality assurance, describes participants, and identifies the internal dynamics of program operations ( Israel et al., 1995 ).
A distinction is often made in the evaluation of interventions between impact and outcome ( Green and Lewis, 1986 ; Israel et al., 1995 ; Simons-Morton et al., 1995 ; Windsor et al., 1994 ). Impact evaluation assesses the effectiveness of the intervention in achieving desired changes in targeted mediators, such as the knowledge, attitudes, beliefs, and behavior of participants. Outcome evaluation examines the effects of the intervention on health status, morbidity, and mortality. Impact evaluation focuses on what the intervention is specifically trying to change, and it precedes an outcome evaluation. It is proposed that if the intervention can effect change in some intermediate outcome (“impact”), the “final” outcome will follow.
Although the association between impact and outcome may not always be substantiated (as discussed earlier in this chapter), impact may be a necessary measure. In some instances, the outcome goals are too far in the future to be evaluated. For example, childhood cardiovascular risk factor intervention studies typically measure intermediate gains in knowledge ( Parcel et al., 1989 ) and changes in diet or physical activity ( Simons-Morton et al., 1991 ). They sometimes assess cholesterol and blood pressure, but they do not usually measure heart disease because that would not be expected to occur for many years.
Given the aims and the dynamic context within which community-based participatory research and evaluation are conducted, methodologic flexibility is essential. Methods must be tailored to the purpose of the research and evaluation and to the context and interests of the community ( Beery and Nelson, 1998 ; deKoning and Martin, 1996 ; Dockery, 1996 ; Dressler, 1993 ; Green et al., 1995 ; Hall, 1992 ; Hatch et al., 1993 ; Israel et al., 1998 ; Marin and Marin, 1991 ; Nyden and Wiewel, 1992 ; Schulz et al., 1998b ; Singer, 1993 ; Stringer, 1996 ). Numerous researchers have suggested greater use of qualitative data, from in-depth interviews and observational studies, for evaluating the context, process, impact, and outcome of community-based participatory research interventions (Fortmann et al., 1995; Goodman, 1999 ; Hugentobler et al., 1992 ; Israel et al., 1995 , 1998 ; Koepsell et al., 1992 ; Mittelmark et al., 1993 ; Parker et al., 1998 ; Sorensen et al., 1998a ; Susser, 1995 ). Triangulation is the use of multiple methods and sources of data to overcome limitations inherent in each method and to improve the accuracy of the information collected, thereby increasing the validity and credibility of the results ( Denzin, 1970 ; Israel et al., 1995 ; Reichardt and Cook, 1980 ; Steckler et al., 1992 ). For examples of the integration of qualitative and quantitative methods in research and evaluation of public-health interventions, see Steckler et al. (1992) and Parker et al. (1998) .
Despite the importance of legislation and regulation to promote public health, the effectiveness of government interventions is poorly understood. In particular, policymakers often cannot answer important empirical questions: do legal interventions work, and at what economic and social cost? Policymakers need to know whether legal interventions achieve their intended goals (e.g., reducing risk behavior). If so, do legal interventions unintentionally increase other risks (a risk/risk tradeoff)? Finally, what are the adverse effects of regulation on personal or economic liberties and on general prosperity in society? This is an important question not only because freedom has an intrinsic value in democracy, but also because activities that dampen economic development can have health effects. For example, research demonstrates the positive correlation between socioeconomic status and health ( Chapter 4 ).
Legal interventions often are not subjected to rigorous research evaluation. The research that has been done, moreover, has faced challenges in methodology. There are so many variables that can affect behavior and health status (e.g., differences in informational, physical, social, and cultural environments) that it can be extraordinarily difficult to demonstrate a causal relationship between an intervention and a perceived health effect. Consider the methodologic constraints in identifying the effects of specific drunk-driving laws. Several kinds of laws can be enacted within a short period, so it is difficult to isolate the effect of each law. Publicity about the problem and the legal response can cross state borders, making state comparisons more difficult. Because people who drive under the influence of alcohol also could engage in other risky driving behaviors (e.g., speeding, failing to wear safety belts, running red lights), researchers need to control for changes in other highway safety laws and traffic law enforcement. Subtle differences between comparison communities can have unanticipated effects on the impact of legal interventions ( DeJong and Hingson, 1998 ; Hingson, 1996 ).
Despite such methodologic challenges, social science researchers have studied legal interventions, often with encouraging results. The social science, medical, and behavioral literature contains evaluations of interventions in several public health areas, particularly in relation to injury prevention ( IOM, 1999 ; Rivara et al., 1997a , b ). For example, studies have evaluated the effectiveness of regulations to prevent head injuries (bicycle helmets: Dannenberg et al., 1993 ; Kraus et al., 1994 ; Lund et al., 1991 ; Ni et al., 1997 ; Thompson et al., 1996a , b ), choking and suffocation (refrigerator disposal and warning labels on thin plastic bags: Kraus, 1985 ), child poisoning (childproof packaging: Rogers, 1996 ), and burns (tap water: Erdmann et al., 1991 ). One regulatory measure that has received a great deal of research attention relates to reductions in cigarette-smoking ( Chapter 6 ).
Legal interventions can be an important part of strategies to change behaviors. In considering them, government and other public health agencies face difficult and complex tradeoffs between population health and individual rights (e.g., autonomy, privacy, liberty, property). One example is the controversy over laws that require motorcyclists to wear helmets. Ethical concerns accompany the use of legal interventions to mandate behavior change and must be part of the deliberation process.
It is not enough to demonstrate that a treatment benefits some patients or community members. The demand for health programs exceeds the resources available to pay for them, so treatments must provide both clinical benefit and value for money. Investigators, clinicians, and program planners must demonstrate that their interventions constitute a good use of resources.
Well over $1 trillion is spent on health care each year in the United States. Current estimates suggest that expenditures on health care exceed $4000 per person ( Health Care Financing Administration, 1998 ). Investments are made in health care to produce good health status for the population, and it is usually assumed that more investment will lead to greater health. Some expenditures in health care produce relatively little benefit; others produce substantial benefits. Cost-effectiveness analysis (CEA) can help guide the use of resources to achieve the greatest improvement in health status for a given expenditure.
Consider the medical interventions in Table 7-4 , all of which are well-known, generally accepted, and widely used. Some are traditional medical care and some are preventive programs. To emphasize the focus on increasing good health, the table presents the data in units of health bought for $1 million rather than in dollars per unit of health, the usual approach in CEA. The life-year is the most comprehensive unit measure of health. Table 7-4 reveals several important points about resource allocation. There is tremendous variation among the interventions in what can be accomplished for $1 million: it buys 7,750 life-years if used for influenza vaccinations for the elderly, 217 life-years if applied to smoking-cessation programs, but only 2 life-years if used to supply Lovastatin to men aged 35–44 who have high total cholesterol but no heart disease and no other risk factors for heart disease.
Table 7-4. Life-Years Yielded by Selected Interventions per $1 Million, 1997 Dollars.
How effectively an intervention contributes to good health depends not only on the intervention, but also on the details of its use. Antihypertensive medication is effective, but Propranolol is more cost-effective than Captopril. Thyroid screening is more cost-effective in women than in men. Lovastatin produces more good health when targeted at older high-risk men than at younger low-risk men. Screening for cervical cancer at 3-year intervals with the Pap smear yields 36 life-years per $1 million (compared with no screening), but each $1 million spent to increase the frequency of screening to 2 years brings only 1 additional life-year.
The numbers in Table 7-4 illustrate a central concept in resource allocation: opportunity cost. The true cost of choosing to use a particular intervention or to use it in a particular way is not the monetary cost per se, but the health benefits that could have been achieved if the money had been spent on another service instead. Thus, the opportunity cost of providing annual Pap smears ($1 million) rather than smoking-cessation programs is the 217 life-years that could have been achieved through smoking cessation.
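The opportunity-cost comparison can be made explicit with a few lines of arithmetic. The life-year figures below are those quoted in the text; the code itself is only a sketch of the bookkeeping.

```python
# Worked sketch of the opportunity-cost reasoning around Table 7-4.
# Life-year yields per $1 million are the figures quoted in the text.

life_years_per_million = {
    "influenza vaccination (elderly)": 7750,
    "smoking-cessation programs": 217,
    "Pap smear every 3 years": 36,
    "Lovastatin (low-risk men 35-44)": 2,
}

budget_millions = 1.0
# Spending the $1 million on annual Pap smears instead of smoking cessation
# forgoes the health the same money would have bought in the alternative use:
opportunity_cost = budget_millions * life_years_per_million["smoking-cessation programs"]
print(f"opportunity cost of not funding cessation: {opportunity_cost:.0f} life-years")
```

The monetary outlay is identical in both choices; only the forgone life-years differ, which is exactly the point of the opportunity-cost concept.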
The term cost-effectiveness is commonly used but widely misunderstood. Some people confuse cost-effectiveness with cost minimization. Cost minimization aims to reduce health care costs regardless of health outcomes. CEA does not have cost-reduction per se as a goal but is designed to obtain the most improvement in health for a given expenditure. CEA also is often confused with cost/benefit analysis (CBA), which compares investments with returns. CBA ranks the amount of improved health associated with different expenditures with the aim of identifying the appropriate level of investment. CEA indicates which intervention is preferable given a specific expenditure.
Usually, costs are represented by the net or difference between the total costs of the intervention and the total costs of the alternative to that intervention. Typically, the measure of health is the QALY. The net health effect of the intervention is the difference between the QALYs produced by an intervention and the QALYs produced by an alternative or other comparative base.
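The net-cost and net-effect comparison described above reduces to the incremental cost-effectiveness ratio. The sketch below uses hypothetical dollar and QALY figures purely for illustration.

```python
# Minimal sketch of the net-cost / net-QALY calculation described above.
# The dollar and QALY figures are hypothetical.

def icer(cost_new: float, cost_alt: float,
         qaly_new: float, qaly_alt: float) -> float:
    """Incremental cost-effectiveness ratio: net dollars per net QALY."""
    return (cost_new - cost_alt) / (qaly_new - qaly_alt)

# Suppose a new program costs $120,000 and yields 14 QALYs in a cohort,
# while usual care costs $40,000 and yields 10 QALYs:
ratio = icer(120_000, 40_000, 14, 10)
print(f"${ratio:,.0f} per QALY gained")
```

The comparator matters as much as the intervention: the same program scored against a cheaper or more effective alternative yields a different ratio, which is why CEA always reports costs and effects as differences.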
Comprehensive as it is, CEA does not include everything that might be relevant to a particular decision—so it should never be used mechanically. Decision-makers can have legitimate reasons to emphasize particular groups, benefits, or costs more heavily than others. Furthermore, some decisions require information that cannot be captured easily in a CEA, such as the effect of an intervention on individual privacy or liberty.
CEA is an analytical framework that arises from the question of which ways of promoting good health—procedures, tests, medications, educational programs, regulations, taxes or subsidies, and combinations and variations of these—provide the most effective use of resources. Specific recommendations about behavioral and psychosocial interventions will contribute the most to good health if they are set in this larger context and based on information that demonstrates that they are in the public interest. However, comparing behavioral and psychosocial interventions with other ways of promoting health on the basis of cost-effectiveness requires additional research. Currently there are too few studies that meet this standard to support such recommendations.
A basic assumption underlying intervention research is that tested interventions found to be effective are disseminated to and implemented in clinics, communities, schools, and worksites. However, there is a sizable gap between science and practice ( Anderson, 1998 ; Price, 1989 , 1998 ). Researchers and practitioners need to ensure that an intervention is effective, and that the community or organization is prepared to adopt, implement, disseminate, and institutionalize it. There also is a need for demonstration research (phase V) to explain more about the process of dissemination itself.
Biomedical research results are commonly reported in the mass media. Nearly every day people are given information about the risks of disease, the benefits of treatment, and the potential health hazards in their environments. They regularly make health decisions on the basis of their understanding of such information. Some evidence shows that lay people often misinterpret health risk information ( Berger and Hendee, 1989 ; Fischhoff, 1999a ), as do their doctors ( Kalet et al., 1994 ; Kong et al., 1986 ). On such a widely publicized issue as mammography, for example, evidence suggests that women overestimate their risk of getting breast cancer by a factor of at least 20 and that they overestimate the benefits of mammography by a factor of 100 ( Black et al., 1995 ). In a study of 500 female veterans ( Schwartz et al., 1997 ), half the women overestimated their risk of death from breast cancer by a factor of 8. This did not appear to be because the subjects thought that they were more at risk than other women; only 10% reported that they were at higher risk than the average woman of their age. The topic of communication of health messages to the public is discussed at length in an IOM report, Speaking of Health: Assessing Health Communication Strategies for Diverse Populations ( IOM, 2001 ).
Improving communication requires understanding what information the public needs. That necessitates both descriptive and normative analyses, which consider what the public believes and what the public should know, respectively. Juxtaposing normative and descriptive analyses might provide guidance for reducing misunderstanding ( Fischhoff and Downs, 1997 ). Formal normative analysis of decisions involves the creation of decision trees, showing the available options and the probabilities of various outcomes of each, whose relative attractiveness (or aversiveness) must be evaluated by people. Although full analyses of decision problems can be quite complex, they often reveal ways to drastically simplify individuals' decision-making problems, in the sense that they reveal a small number of issues of fact or value that really merit serious attention ( Clemen, 1991 ; Merz et al., 1993 ; Raiffa, 1968 ). Those few issues can still pose significant challenges for decision makers. The actual probabilities can differ from people's subjective probabilities (which govern their behavior). For example, a woman who overestimates the value of a mammogram might insist on tests that are of little benefit to her and mistrust the political/medical system that seeks to deny such care ( Woloshin et al., 2000 ). Obtaining estimates of subjective probabilities is difficult. Although eliciting probabilities has been studied in other contexts over the past two generations ( von Winterfeldt and Edwards, 1986 ; Yates, 1990 ), it has received much less attention in medical contexts, where it can pose questions that people are unwilling or unable to confront ( Fischhoff and Bruine de Bruin, 1999 ).
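The kind of formal normative analysis described above can be sketched as a minimal decision tree. The options, probabilities, and utilities below are invented for illustration and do not represent any real screening decision.

```python
# Toy decision tree: each option has (probability, utility) branches,
# and the normatively preferred option maximizes expected utility.
# All numbers are hypothetical.

def expected_utility(branches):
    """Sum of probability * utility over an option's outcome branches."""
    return sum(p * u for p, u in branches)

options = {
    # 2% chance of disease; screening catches it early (higher utility),
    # not screening leaves it untreated (lower utility).
    "screen":     [(0.02, 0.85), (0.98, 0.99)],
    "not screen": [(0.02, 0.20), (0.98, 1.00)],
}
best = max(options, key=lambda name: expected_utility(options[name]))
for name, branches in options.items():
    print(name, round(expected_utility(branches), 4))
print("preferred option:", best)
```

Even this tiny tree shows the simplification such analyses offer: the whole decision turns on two numbers, the disease probability and the utility gap between treated and untreated disease.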
In addition to such quantitative beliefs, people often need a qualitative understanding of the processes by which risks are created and controlled. This allows them to get an intuitive feeling for the quantitative estimates, to feel competent to make decisions on their own behalf, to monitor their own experience, and to know when they need help ( Fischhoff, 1999b ; Leventhal and Cameron, 1987 ). Not seeing the world in the same way as scientists do also can lead lay people to misinterpret communications directed at them. One common (and some might argue, essential) strategy for evaluating any public health communication or research instrument is to ask people to think aloud as they answer draft versions of questions ( Ericsson and Simon, 1994 ; Schriver, 1989 ). For example, subjects might be asked about the probability of getting HIV from unprotected sexual activity. Reasons for their assessments might be explored as they elaborate on their impressions and the assumptions they use ( Fischhoff, 1999b ; McIntyre and West, 1992 ). The result should both reveal their intuitive theories and improve the communication process.
When people must evaluate their options, the way in which information is framed can have a substantial effect on how it is used ( Kahneman and Tversky, 1983 ; Schwartz, 1999 ; Tversky and Kahneman, 1988 ). The fairest presentation of risk information might be one in which multiple perspectives are used ( Kahneman and Tversky, 1983 , 1996 ). For example, one common situation involves small risks that add up over the course of time, through repeated exposures. The chances of being injured in an automobile crash are very small for any one outing, whether or not the driver wears a seatbelt. However, driving over a lifetime creates a substantial risk—and a substantial benefit for seatbelt use. One way to communicate that perspective is to do the arithmetic explicitly, so that subjects understand it ( Linville et al., 1993 ). Another method that helps people to understand complex information involves presenting ranges rather than best estimates. Science is uncertain, and it should be helpful for people to understand the intervals within which their risks are likely to fall ( Lipkus and Hollands, 1999 ).
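"Doing the arithmetic explicitly" for small repeated risks amounts to compounding the per-event probability. The per-trip risk and trip count below are hypothetical round numbers, not estimates from the seatbelt literature.

```python
# Sketch of compounding a small per-event risk over many exposures.
# The per-trip risk and trip count are hypothetical round numbers.

def cumulative_risk(per_event_risk: float, n_events: int) -> float:
    """Chance of at least one bad outcome across n independent exposures."""
    return 1 - (1 - per_event_risk) ** n_events

per_trip = 1e-6          # a one-in-a-million injury risk per outing
trips = 50 * 365 * 2     # two trips a day for 50 years
lifetime = cumulative_risk(per_trip, trips)
print(f"lifetime risk from {trips:,} trips: {lifetime:.1%}")
```

A risk that is negligible for any single outing accumulates to a few percent over a driving lifetime, which is the perspective shift the text attributes to Linville et al. (1993).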
Risk communication can be improved. For example, many members of the public have been fearful that proximity to electromagnetic fields and power lines can increase the risk of cancer. Studies revealed that many people knew very little about the properties of electricity. In particular, they usually were unaware that exposure decreases rapidly with distance from the lines. After studying mental models of this risk, Morgan (1995) developed a tiered brochure that presented the problem at a variety of levels of detail. The brochure addressed common misconceptions and explained why scientists disagree about the risks posed by electromagnetic fields. Participants on each side of the debate reviewed the brochure for fairness. Several hundred thousand copies of the brochure have now been distributed. This approach to communication requires that the public listen to experts, but it also requires that the experts listen to the public. Providing information is not enough; it is necessary to take the next step to demonstrate that the information is presented in an unbiased fashion and that the public accurately processes what is offered ( Edworthy and Adams, 1997 ; Hadden, 1986 ; Morgan et al., 2001 ; National Research Council, 1989 ).
The electromagnetic field brochure is an example of a general approach in cognitive psychology, in which communications are designed to create coherent mental models of the domain being considered ( Ericsson and Simon, 1994 ; Fischhoff, 1999b ; Gentner and Stevens, 1983 ; Johnson-Laird, 1980 ). The bases of these communications are formal models of the domain. In the case of the complex processes creating and controlling risks, the appropriate representation is often an influence diagram, a directed graph that captures the uncertain relationships among the factors involved ( Clemen, 1991 ; Morgan et al., 2001 ). Creating such a diagram requires pooling the knowledge of diverse disciplines, rather than letting each tell its own part of the story. Identifying the critical messages requires considering both the science of the risk and recipients' intuitive conceptualizations.
Research results are commonly misinterpreted. When a study shows that the effect of a treatment is statistically significant, it is often assumed that the treatment works for every patient or at least for a high percentage of those treated. In fact, large experimental trials, often with considerable publicity, promote treatments that have only minor effects in most patients. For example, contemporary care for high blood serum cholesterol has been greatly influenced by results of the Coronary Primary Prevention Trial (CPPT; Lipid Research Clinics Program, 1984 ), in which men were randomly assigned to take a placebo or cholestyramine. Cholestyramine can significantly lower serum cholesterol and, in this trial, reduced it by an average of 8.5%. Men in the treatment group experienced 24% fewer heart attack deaths and 19% fewer heart attacks than did men who took the placebo.
The CPPT showed a 24% reduction in cardiovascular mortality in the treated group. However, the absolute proportions of patients who died of cardiovascular disease were similar in the 2 groups: there were 38 deaths among 1900 participants (2%) in the placebo group and 30 deaths among 1906 participants (1.6%) in the cholestyramine group. In other words, taking the medication for 6 years reduced the chance of dying from cardiovascular disease from 2% to 1.6%.
Because of the difficulties in communicating risk ratio information, the use of simple statistics, such as the number needed to treat (NNT), has been suggested ( Sackett et al., 1997 ). NNT is the number of people that must be treated to avoid one bad outcome. Statistically, NNT is defined as the reciprocal of the absolute risk reduction. In the cholesterol example, if 2% (0.020) of the patients died in the control arm of an experiment and 1.6% (0.016) died in the experimental arm, the absolute risk reduction is 0.020 − 0.016 = 0.004. The reciprocal of 0.004 is 250. In this case, 250 people would have to be treated for 6 years to avoid 1 death from coronary heart disease. Treatments can harm as well as benefit, so in addition to calculating the NNT, it is valuable to calculate the number needed to harm (NNH). This is the number of people a clinician would need to treat to produce one adverse event. NNT and NNH can be modified for those in particular risk groups. The advantage of these simple numbers is that they allow much clearer communication of the magnitude of treatment effectiveness.
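The NNT arithmetic from the cholestyramine example can be reproduced directly. The event rates for NNT are those given in the text; the adverse-event rates used for the NNH illustration are hypothetical.

```python
# NNT arithmetic from the cholestyramine example; the NNH harm rates
# below are hypothetical, added only to illustrate the parallel formula.

def number_needed_to_treat(risk_control: float, risk_treated: float) -> float:
    """Reciprocal of the absolute risk reduction."""
    return 1 / (risk_control - risk_treated)

nnt = number_needed_to_treat(0.020, 0.016)   # 1 / 0.004 = 250
# Hypothetical: 1.0% of treated patients vs. 0.5% of controls had a
# serious side effect, so the number needed to harm is 1 / 0.005:
nnh = 1 / (0.010 - 0.005)
print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}")
```

Comparing NNT with NNH in the same units (patients treated) is what makes the tradeoff between benefit and harm communicable to patients and clinicians.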
Once patients understand the complex information about outcomes, they can fully participate in the decision-making process. The final step in disseminating information to patients involves an interactive process that allows patients to make informed choices about their own health-care.
Despite a growing consensus that they should be involved, evidence suggests that patients are rarely consulted. Wennberg (1995) outlined a variety of common medical decisions in which there is uncertainty. In each, treatment selection involves profiles of risks and benefits for patients. Thiazide medications can be effective at controlling blood pressure, but they also can be associated with increased serum cholesterol; the benefit of blood pressure reduction must be balanced against such side effects as dizziness and impotence.
Factors that affect patient decision making and use of health services are not well understood. It is usually assumed that use of medical services is driven primarily by need, that those who are sickest or most disabled use services the most ( Aday, 1998 ). Although illness is clearly the major reason for service use, the literature on small-area variation demonstrates that there can be substantial variability in service use among communities that have comparable illness burdens and comparable insurance coverage ( Wennberg, 1998 ). Therefore, social, cultural, and system variables also contribute to service use.
The role of patients in medical decision making has undergone substantial recent change. In the early 1950s, Parsons (1951) suggested that patients were excluded from medical decision making unless they assumed the “sick role,” in which patients submit to a physician's judgment, and it is assumed that physicians understand the patients' preferences. Through a variety of changes, patients have become more active. More information is now available, and many patients demand a greater role ( Sharf, 1997 ). The Internet offers vast amounts of information to patients, some of it misleading or inaccurate ( Impicciatore et al., 1997 ). One difficulty is that many patients are not sophisticated consumers of technical medical information ( Strum, 1997 ).
Another important issue is whether patients want a role. The literature is contradictory on this point; at least eight studies have addressed the issue. Several suggest that most patients express little interest in participating ( Cassileth et al., 1980 ; Ende et al., 1989 ; Mazur and Hickam, 1997 ; Pendleton and House, 1984 ; Strull et al., 1984 ; Waterworth and Luker, 1990 ). Those studies challenge the basis of shared medical decision making. Is it realistic to engage patients in the process if they are not interested? Deber ( Deber, 1994 ; Deber et al., 1996 ) has drawn an important distinction between problem solving and decision making. Medical problem solving requires technical skill to make an appropriate diagnosis and select treatment. Most patients prefer to leave those judgments in the hands of experts ( Ende et al., 1989 ). Studies challenging the notion that patients want to make decisions typically asked questions about problem solving ( Ende et al., 1989 ; Pendleton and House, 1984 ; Strull et al., 1984 ).
Shared decision making requires patients to express personal preferences for desired outcomes, and many decisions involve very personal choices. Wennberg (1998) offers examples of variation in health care practices that are dominated by physician choice. One is the choice between mastectomy and lumpectomy for women with well-defined breast cancer. Systematic clinical trials have shown that the probability of surviving breast cancer is about equal after mastectomy and after lumpectomy followed by radiation ( Lichter et al., 1992 ). But in some areas of the United States, nearly half of women with breast cancer have mastectomies (for example, Provo, Utah); in other areas less than 2% do (for example, New Jersey; Wennberg, 1998 ). Such differences are determined largely by surgeon choice; patient preference is not considered. In the breast cancer example, interviews suggest that some women have a high preference for maintaining the breast, and others feel more comfortable having more breast tissue removed. The choices are highly personal and reflect variations in comfort with the idea of life with and without a breast. Patients might not want to engage in technical medical problem solving, but they are the only source of information about preferences for potential outcomes.
The process by which patients exercise choice can be difficult. There have been several evaluations of efforts to involve patients in decision making. Greenfield and colleagues (1985) taught patients how to read their own medical records and offered coaching on what questions to ask during encounters with physicians. In this randomized trial involving patients with peptic ulcer disease, those assigned to a 20-minute treatment had fewer functional limitations and were more satisfied with their care than were patients in the control group. A similar experiment involving patients treated for diabetes showed that patients randomly assigned to receive visit preparation scored significantly better than controls on three dimensions of health-related quality of life (mobility, role performance, physical activity). Furthermore, there were significant improvements for biochemical measures of diabetes control ( Greenfield et al., 1988 ).
Many medical decisions are more complex than those studied by Greenfield and colleagues. There are usually several treatment alternatives, and the outcomes for each choice are uncertain. Also, the importance of the outcomes might be valued differently by different people. Shared decision-making programs have been proposed to address those concerns (Kasper et al., 1992). The programs usually use electronic media. Some involve interactive technologies in which a patient becomes familiar with the probabilities of various outcomes; video also allows the patient to witness the outcomes experienced by others who have made each treatment choice. A variety of interactive programs have been systematically evaluated. In one study (Barry et al., 1995), patients with benign prostatic hyperplasia were given the opportunity to use an interactive video. The video was generally well received, and the authors reported a significant reduction in the rate of surgery and an increase in the proportion of patients who chose “watchful waiting” after using the decision aid. Flood et al. (1996) reported similar results with an interactive program.
Not all evaluations of decision aids have been positive. In one evaluation of an impartial video for patients with ischemic heart disease, 44% of the patients found it helpful for making treatment choices, but more than 40% reported that it increased their anxiety (Liao et al., 1996). Most of the patients had received advice from their physicians before watching the video.
Despite enthusiasm for shared medical decision making, little systematic research has evaluated interventions to promote it ( Frosch and Kaplan, 1999 ). Systematic experimental trials are needed to determine whether the use of shared decision aids enhances patient outcomes. Although decision aids appear to enhance patient satisfaction, it is unclear whether they result in reductions in surgery, as suggested by Wennberg (1998) , or in improved patient outcomes ( Frosch and Kaplan, 1999 ).
The effect of any preventive intervention depends both on its ability to influence health behavior change or reduce health risks and on the extent to which the target population has access to and participates in the program. Few preventive interventions are free-standing in the community. Rather, organizations serve as “hosts” for health promotion and disease prevention programs. Once a program has proven successful in demonstration projects and efficacy trials, it must be adopted and implemented by new organizations. Unfortunately, diffusion to new organizations often proceeds very slowly ( Murray, 1986 ; Parcel et al., 1990 ).
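The dependence of a program's population-level effect on both its efficacy and its reach can be made concrete with a simple product, in the spirit of the impact = reach × efficacy heuristic used in the prevention literature. The function and the numbers below are illustrative assumptions, not figures from any of the studies cited:

```python
def population_impact(reach: float, efficacy: float) -> float:
    """Illustrative heuristic: the fraction of the target population
    expected to benefit is the product of the fraction reached by the
    program and the fraction of participants who change behavior."""
    if not (0.0 <= reach <= 1.0 and 0.0 <= efficacy <= 1.0):
        raise ValueError("reach and efficacy must be proportions in [0, 1]")
    return reach * efficacy

# A highly efficacious program that diffuses poorly...
print(round(population_impact(reach=0.10, efficacy=0.60), 2))  # 0.06
# ...can be outperformed by a modest program that is widely adopted.
print(round(population_impact(reach=0.50, efficacy=0.20), 2))  # 0.1
```

The arithmetic illustrates why slow diffusion to host organizations, discussed below, can matter as much as the strength of the intervention itself.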
A staged change process has been proposed for optimal diffusion of preventive interventions to new organizations. Although different researchers have offered a variety of approaches, there is consensus on the importance of at least four stages (Goodman et al., 1997).
Research investigating the diffusion of health behavior change programs to new organizations can be seen, for example, in adoption of prevention curricula by schools and of preventive services by medical care practices.
Schools are important because they allow consistent contact with children over their developmental trajectory and they provide a place where acquisition of new information and skills is normative ( Orlandi, 1996b ). Although much emphasis has been placed on developing effective health behavior change curricula for students throughout their school years, the literature is replete with evaluations of school-based curricula that suggest that such programs have been less than successful ( Bush et al., 1989 ; Parcel et al., 1990 ; Rohrbach et al., 1996 ; Walter, 1989 ). Challenges or barriers to effective diffusion of the programs include organizational issues, such as limited time and resources, few incentives for the organization to give priority to health issues, pressure to focus on academic curricula to improve student performance on proficiency tests, and unclear role delineation in terms of responsibility for the program; extra-organizational issues or “environmental turbulence,” such as restructuring of schools, changing school schedules or enrollments, uncertainties in public funding; and characteristics of the programs that make them incompatible with the potential host organizations, such as being too long, costly, and complex ( Rohrbach et al., 1996 ; Smith et al., 1995 ).
Initial or traditional efforts to enhance diffusion focused on the characteristics of the intervention program, but more recent studies have focused on the change process itself. Two NCI-funded studies to diffuse tobacco prevention programs throughout schools in North Carolina and Texas targeted the four stages of change and were evaluated through randomized, controlled trials (Goodman et al., 1997; Parcel et al., 1989, 1995; Smith et al., 1995; Steckler et al., 1992). Teacher-training interventions appeared to enhance the likelihood of implementation in each study (an effect that has been replicated in other investigations; see Perry et al., 1990). However, other strategies (e.g., process consultation, newsletters, self-paced instructional video) were less successful at enhancing adoption and institutionalization. None of the strategies attempted to change the organizing arrangements (such as reward systems or role responsibilities) of the school districts to support continued implementation of the program.
These results suggest that further reliance on organizational change theory might help programs diffuse more rapidly and thoroughly. For example, Rohrbach et al. (1996, pp. 927–928) suggest that “change agents and school personnel should work as a team to diagnose any problems that may impede program implementation and develop action plans to address them [and that]…change agents need to promote the involvement of teachers, as well as that of key administrators, in decisions about program adoption and implementation.” These suggestions are clearly consistent with an organizational development approach. Goodman and colleagues (1997) suggest that the North Carolina intervention might have been more effective had it included more participative problem diagnosis and action planning, and had consultation been less directive and more oriented toward increasing the fit between the host organization and the program.
Primary care medical practices have long been regarded as organizational settings that provide opportunities for health behavior interventions. With the growth of managed care and its financial incentives for prevention, these opportunities are even greater ( Gordon et al., 1996 ). Much effort has been invested in the development of effective programs and processes for clinical practices to accomplish health behavior change. However, the diffusion of such programs to medical practices has been slow (e.g., Anderson and May, 1995 ; Lewis, 1988 ).
Most systemic programs encourage physicians, nurses, health educators, and other members of the health-professional team to provide more consistent change-related statements and behavioral support for health-enhancing behaviors in patients ( Chapter 5 ). There might be fundamental aspects of a medical practice that support or inhibit efforts to improve health-related patient behavior ( Walsh and McPhee, 1992 ). Visual reminders to stay up-to-date on immunizations, to stop smoking cigarettes, to use bicycle helmets, and to eat a healthy diet are examples of systemic support for patient activation and self-care ( Lando et al., 1995 ). Internet support for improved self-management of diabetes has shown promise ( McKay et al., 1998 ). Automated chart reminders to ask about smoking status, update immunizations, and ensure timely cancer-screening examinations—such as Pap smears, mammography, and prostate screening—are systematic practice-based improvements that increase the rate of success in reaching stated goals on health process and health behavior measures ( Cummings et al., 1997 ). Prescription forms for specific telephone callback support can enhance access to telephone-based counseling for weight loss, smoking cessation, and exercise and can make such behavioral teaching and counseling more accessible ( Pronk and O'Connor, 1997 ). Those and other structural characteristics of clinical practices are being used and evaluated as systematic practice-based changes that can improve treatment for, and prevention of, various chronic illnesses ( O'Connor et al., 1998 ).
Barriers to diffusion include physician factors, such as lack of training, lack of time, and lack of confidence in one's prevention skills; health-care system factors, such as lack of health-care coverage and inadequate reimbursement for preventive services in fee-for-service systems; and office organization factors, such as inflexible office routines, lack of reminder systems, and unclear assignment of role responsibilities ( Thompson et al., 1995 ; Wagner et al., 1996 ).
The capitated financing of many managed-care organizations greatly reduces system barriers. Interventions that have focused solely on physician knowledge and behavior have not been very effective; interventions that also addressed office organization factors have been more effective (Solberg et al., 1998b; Thompson et al., 1995). For example, the Put Prevention Into Practice (PPIP) program (Griffith et al., 1995), a comprehensive federal effort recommended by the U.S. Preventive Services Task Force, is distributed by federal agencies and through professional associations. Using a case study approach, McVea and colleagues (1996) studied the implementation of the program in family practice settings. They found that PPIP was “used not at all or only sporadically by the practices that had ordered the kit” (p. 363). The authors suggested that practices providing selected preventive services did not adopt PPIP because they lacked the organizational skills and resources to incorporate the prevention systems into their office routines without external assistance.
Descriptive research clearly indicates a need for well-conceived and methodologically rigorous diffusion research. Many of the barriers to more rapid and effective diffusion are clearly “systems problems” (Solberg et al., 1998b). Thus, even though the results are somewhat mixed, recent work applying systems approaches and organizational development strategies to the diffusion dilemma is encouraging. In particular, the emphasis on building internal capacity for diffusion of preventive interventions—for example, continuous quality improvement teams (Solberg et al., 1998a) and the identification and training of “program champions” within the adopting systems (Smith et al., 1995)—seems crucial for institutionalization of the programs.
This section examines three aspects of dissemination: the need for dissemination of effective community interventions, community readiness for interventions, and the role of dissemination research.
Dissemination requires the identification of core and adaptive elements of an intervention ( Pentz et al., 1990 ; Pentz and Trebow, 1997 ; Price, 1989 ). Core elements are features of an intervention program or policy that must be replicated to maintain the integrity of the interventions as they are transferred to new settings. They include theoretically based behavior change strategies, targeting of multiple levels of influence, and the involvement of empowered community leaders ( Florin and Wandersman, 1990 ; Pentz, 1998 ). Practitioners need training in specific strategies for the transfer of core elements ( Bero et al., 1998 ; Orlandi, 1986 ). In addition, the amount of intervention delivered and its reach into the targeted population might have to be unaltered to replicate behavior change in a new setting. Research has not established a quantitative “dose” of intervention or a quantitative guide for the percentage of core elements that must be implemented to achieve behavior change. Process evaluation can provide guidance regarding the desired intensity and fidelity to intervention protocol. Botvin and colleagues (1995) , for example, found that at least half the prevention program sessions needed to be delivered to achieve the targeted effects in a youth drug abuse prevention program. They also found that increased prevention effects were associated with fidelity to the intervention protocol, which included standardized training of those implementing the program, implementation within 2 weeks of that training, and delivery of at least two program sessions or activities per week ( Botvin et al., 1995 ).
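The process criteria reported by Botvin and colleagues (1995) lend themselves to a simple fidelity check: at least half the sessions delivered, implementation within 2 weeks of training, and at least two sessions or activities per week. The sketch below encodes those thresholds; the function name, parameter names, and the idea of combining the criteria into a single pass/fail check are illustrative assumptions, not part of any published instrument:

```python
def meets_fidelity_criteria(sessions_delivered: int,
                            sessions_planned: int,
                            days_from_training_to_start: int,
                            sessions_per_week: float) -> bool:
    """Hypothetical check against the process criteria that Botvin and
    colleagues (1995) associated with stronger prevention effects."""
    dose_ok = sessions_delivered >= sessions_planned / 2   # at least half the sessions
    timing_ok = days_from_training_to_start <= 14          # within 2 weeks of training
    pace_ok = sessions_per_week >= 2                       # at least two sessions per week
    return dose_ok and timing_ok and pace_ok

# A site that delivered 8 of 15 planned sessions, started 10 days
# after training, and ran 2 sessions per week meets all three criteria.
print(meets_fidelity_criteria(8, 15, 10, 2))  # True
```

A process evaluation could apply such a check site by site to relate implementation fidelity to observed program effects.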
Adaptive elements are features of an intervention that can be tailored to local community, organizational, social, and economic realities of a new setting without diluting the effectiveness of the intervention ( Price, 1989 ). Adaptations might include timing and scheduling or culturally meaningful themes through which the educational and behavior change strategies are delivered.
Community and organizational factors might facilitate or hinder the adoption, implementation, and maintenance of innovative interventions. Diffusion theory assumes that the unique characteristics of the adopter (such as a community, school, or worksite) interact with the specific attributes of the innovation (such as its risk-factor targets) to determine whether and when an innovation is adopted and implemented (Emmons et al., 2000; Rogers, 1983, 1995). Rogers (1983, 1995) has identified characteristics that predict the adoption of innovations in communities and organizations. For example, an innovation that has a relative advantage over the idea or activity that it supersedes is more likely to be adopted. In the case of health promotion, organizations might see smoke-free worksites as having a relative advantage not only for employee health but also for the reduction of absenteeism. An innovation that is seen as compatible with adopters' sociocultural values and beliefs, with previously introduced ideas, or with adopters' perceived need for innovation is more likely to be implemented. The less complex and the clearer the innovation, the more likely it is to be adopted. For example, potential adopters are more likely to change their health behaviors when educators clearly specify the skills needed to change them. Trialability is the degree to which an innovation can be experimented with on a limited basis. In nutrition education, adopters are more likely to prepare low-fat recipes at home if they have an opportunity to taste the results in a class or supermarket and are given clear, simple directions for preparing them. Finally, observability is the degree to which the results of an innovation are visible to others. In health behavior change, an example of observability might be the attention given to a health promotion program by the popular press (Pentz, 1998; Rogers, 1983).
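Rogers' five perceived attributes can be read as a qualitative checklist for gauging how readily an innovation will be adopted. The sketch below records that checklist as a data structure; the attribute names follow Rogers, but the 0-to-1 rating scale and the naive averaging are assumptions made purely for illustration, not a validated instrument:

```python
from dataclasses import dataclass

@dataclass
class InnovationProfile:
    """Rogers' (1983, 1995) five perceived attributes of an innovation.
    Each is rated on an assumed 0-1 scale (the scale is illustrative)."""
    relative_advantage: float  # advantage over the practice it supersedes
    compatibility: float       # fit with adopters' values, ideas, and needs
    simplicity: float          # inverse of complexity: clearer is better
    trialability: float        # can it be tried on a limited basis?
    observability: float       # are its results visible to others?

    def adoption_outlook(self) -> float:
        """Naive average of the five ratings; higher suggests faster adoption."""
        return (self.relative_advantage + self.compatibility + self.simplicity
                + self.trialability + self.observability) / 5

# Example: a smoke-free worksite policy with a clear relative advantage
# and visible results, but limited trialability.
policy = InnovationProfile(0.9, 0.7, 0.8, 0.3, 0.9)
print(round(policy.adoption_outlook(), 2))  # 0.72
```

Such a profile is useful mainly as a structured prompt for planners to ask which attributes of a candidate program are weak before attempting dissemination.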
The ability to identify effective interventions and explain the characteristics of communities and organizations that support dissemination of those interventions provides the basic building blocks for dissemination. It is necessary, however, to learn more about how dissemination occurs to increase its effectiveness ( Pentz, 1998 ). What are the core elements of interventions, and how can they be adapted ( Price, 1989 )? How do the predictors of diffusion function in the dissemination process ( Pentz, 1998 )? What characteristics of community leaders are associated with dissemination of prevention programs? What personnel and material resources are needed to implement and maintain prevention programs? How can written materials and training in program implementation be provided to preserve fidelity to core elements ( Price, 1989 )?
Dissemination research could help identify alternative ways of conceptualizing the transfer of intervention technology from research to practice settings. Rather than disseminating an exact replication of specific tested interventions, program transfer might be based on core and adaptive intervention components at both the individual and community organizational levels (Blaine et al., 1997; Perry, 1999). Dissemination might also be viewed as replicating a community-based participatory research process, or as a planning process that incorporates core components (Perry, 1999), rather than as exact duplication of all aspects of intervention activities.
The principles of community-based participatory research presented here could be operationalized and used as criteria for examining the extent to which these dimensions were disseminated to other projects. The guidelines developed by Green and colleagues (1995) for classifying participatory research projects also could be used. Similarly, based on her research and experience with children and adolescents in school health behavior change programs, Perry (1999) developed a guidebook that outlines a 10-step process for developing communitywide health behavior programs for children and adolescents.
To address complex health issues effectively, organizations increasingly form links with one another, either as dyadic connections (pairs) or as networks (Alter and Hage, 1992). The potential benefits of these interorganizational collaborations include access to new information, ideas, materials, and skills; minimization of duplication of effort and services; shared responsibility for complex or controversial programs; increased power and influence through joint action; and increased options for intervention (e.g., one organization might not experience the political constraints that hamper the activities of another; Butterfoss et al., 1993). However, interorganizational linkages have costs. Time and resources must be devoted to the formation and maintenance of relationships. Negotiating the assessment and planning processes can take longer. And an organization can sometimes find that the policies and procedures of other organizations are incompatible with its own (Alter and Hage, 1992; Butterfoss et al., 1993).
One way a dyadic linkage between organizations can serve health-promoting goals grows out of the diffusion of innovations through organizations. An organization can serve as a “linking agent” (Monahan and Scheirer, 1988), facilitating the adoption of a health innovation by organizations that are potential implementors. For example, the National Institute for Dental Research (NIDR) developed a school-based program to encourage children to use a fluoride mouth rinse to prevent caries. Rather than marketing the program directly to the schools, NIDR worked with state agencies to promote the program. In a national study, Monahan and Scheirer (1988) found that when state agencies devoted more staff to the program and located a moderate proportion of their staff in regional offices (rather than in a central office), a larger proportion of school districts was likely to implement the program. Other programs, such as the Heart Partners program of the American Heart Association (Roberts-Gray et al., 1998), have used the concept of linking agents to diffuse preventive interventions. Studies of these approaches attempt to identify the organizational policies, procedures, and priorities that permit the linking agent to reach a large proportion of the organizations that might implement the health behavior program. However, the research in this area does not yet allow general conclusions or guidelines to be drawn.
Interorganizational networks are commonly used in community-wide health initiatives. Such networks might be composed of similar organizations that coordinate service delivery (often called consortia) or organizations from different sectors that bring their respective resources and expertise to bear on a complex health problem (often called coalitions). Multihospital systems or linkages among managed-care organizations and local health departments for treating sexually transmitted diseases ( Rutherford, 1998 ) are examples of consortia. The interorganizational networks used in Project ASSIST and COMMIT, major NCI initiatives to reduce the prevalence of smoking, are examples of coalitions ( U.S. Department of Health and Human Services, 1990 ).
Stage theory has been applied to the formation and performance of interorganizational networks ( Alter and Hage, 1992 ; Goodman and Wandersman, 1994 ). Various authors have posited somewhat different stages of development, but they all include: initial actions, to form the coalition; the formalization of the mission, structure, and processes of the coalition; planning, development, and implementation of programmatic activities; and accomplishment of the coalition's health goals. Stage theory suggests that different strategies are likely to facilitate success at different stages of development ( Lewin, 1951 ; Schein, 1987 ). The complexity, formalization, staffing patterns, communication and decision-making patterns, and leadership styles of the interorganizational network will affect its ability to progress toward its goals ( Alter and Hage, 1992 ; Butterfoss et al., 1993 ; Kegler et al., 1998a , b ).
In 1993, Butterfoss and colleagues reviewed the literature on community coalitions and found “relatively little empirical evidence” (p. 315) to bring to bear on the assessment of their effectiveness. Although the use of coalitions in community-wide health promotion continues, the accumulation of evidence supporting their effectiveness is still slim. Several case studies suggest that coalitions and consortia can be successful in bringing about changes in health behaviors, health systems, and health status (e.g., Butterfoss et al., 1998 ; Fawcett et al., 1997 ; Kass and Freudenberg, 1997 ; Myers et al., 1994 ; Plough and Olafson, 1994 ). However, the conditions under which coalitions are most likely to thrive and the strategies and processes that are most likely to result in effective functioning of a coalition have not been consistently identified empirically.
Evaluation models, such as the FORECAST model ( Goodman and Wandersman, 1994 ) and the model proposed by the Work Group on Health Promotion and Community Development at the University of Kansas ( Fawcett et al., 1997 ), address the lack of systematic and rigorous evaluation of coalitions. These models provide strategies and tools for assessing coalition functioning at all stages of development, from initial formation to ultimate influence on the coalition's health goals and objectives. They are predicated on the assumption that the successful passage through each stage is necessary, but not sufficient, to ensure successful passage through the next stage. Widespread use of these and other evaluation frameworks and tools can increase the number and quality of the empirical studies of the effects of interorganizational linkages.
Orlandi (1996a) states that diffusion failures often result from a lack of fit between the proposed host organization and the intervention program. Thus, he suggests that if the purpose is to diffuse an existing program, the design of the program and the process of diffusion need to be flexible enough to adapt to the needs and resources of the organization. If the purpose is to develop and disseminate a new program, the innovation development and transfer processes should be integrated. Those conclusions are consistent with some of the studies reviewed above. For example, McVea et al. (1996) concluded that a “one size fits all” approach to clinical preventive systems was not likely to diffuse effectively.