2a. What styles of leadership are most effective in establishing and sustaining AHDs?
2b. What styles of management are most effective in establishing and sustaining AHDs?
3a. How do practitioners in settings with AHD partnerships differ from practitioners in settings without AHD partnerships in terms of background, training, and expertise?
3b. How do academicians in settings with AHD partnerships differ from academicians in settings without AHD partnerships in terms of background, training, and expertise?
4a. What are the critical resources for establishing AHDs?
4b. What are the critical organizational environments for establishing AHDs?
5. What is the variability across AHDs in resources, and how does such variability matter?
6. What is the value of shared personnel in AHDs?
7. Which types of personnel contribute most to AHDs?
8. What arrangements for sharing personnel in AHDs have been successful?
9. What types of formal agreements have been used to establish AHDs?
10. What are the critical elements of formal agreements that have been used to establish AHDs?
11. How do the prevailing attitudes about practice and academia differ in settings with AHD partnerships versus settings without AHD partnerships? Do these attitudes influence the ability to establish and maintain AHDs?
12. What are the advantages to learning in AHD settings from the perspectives of students, faculty, and practitioners?
13. What are the advantages to working in AHD settings from the perspectives of faculty and practitioners?
14. Are academic and practice organizations prepared to jointly develop data for enhancing teaching, research, and practice?
The purpose of a research agenda focused on AHDs is to stimulate interest in and research focus on the AHD model; developing the evidence base for AHDs will bring us closer to answering the question “Do they make a difference?” The use of the logic model framework for this research agenda is intended to not only reflect the complexity of the overarching question “Do AHDs make a difference?” but to also provide a means of conceptualizing how such a question can be approached by breaking it down into its constituent parts: Does the AHD relationship make a difference in how the academic institution or governmental health agency operates and in what it does, how it does it, what it produces, and what the ultimate impacts may be? Although often depicted as a linear model, we use the logic model as an organizing framework, with no particular emphasis on or requirement for a specific sequence for research on AHDs to follow.
The published literature supports the notion that developing a research agenda can be a valuable means of stirring the research field—and its funders—to action.23–25 In a recent and related example, a research agenda for Public Health Services and Systems Research was published in the American Journal of Preventive Medicine in 2012,23 the product of a comprehensive, year-long process involving multiple stakeholders. A PubMed search for articles using (“public health services” AND “systems research”) OR (“public health systems” AND “services research”) OR “public health systems research” showed an increase in the number of articles published when comparing the pre– and post–research agenda publication periods. For 2003 to 2012, the search returned 51 articles (5.1 per year); for 2013 to 2016 alone, the same search yielded 32 articles (8.0 per year). Although some of this increase may simply be attributable to better use of terms and keywords by authors and editors or to changes in indexing at the National Library of Medicine, it is at least plausible that a portion of the increase can be attributed to publication of the research agenda in 2012. Moreover, we believe the imprimatur of a research agenda that includes a focus on public health practice sends a clear signal to the academy that such research is fundable, that it is worthy of research attention by faculty, and that it can result in quality research publications that matter for promotion and tenure. In a more recent example, the Public Health Accreditation Board published its research agenda in 2015,24 using as its starting point the list of research questions generated by the logic model of Joly et al.21 mentioned earlier.
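The before-and-after comparison above reduces to simple per-year arithmetic; the following is a minimal sketch of that calculation (our own illustration, not part of the original analysis):

```python
# Illustrative sketch of the per-year publication-rate comparison described
# above. The counts and year ranges are those reported in the text; the
# helper function itself is our own, hypothetical addition.

def per_year_rate(article_count: int, start_year: int, end_year: int) -> float:
    """Average articles per year over an inclusive range of years."""
    return article_count / (end_year - start_year + 1)

pre_agenda = per_year_rate(51, 2003, 2012)   # before the 2012 research agenda
post_agenda = per_year_rate(32, 2013, 2016)  # after the research agenda

print(f"pre: {pre_agenda:.1f}/yr, post: {post_agenda:.1f}/yr")  # pre: 5.1/yr, post: 8.0/yr
```

The comparison is descriptive only; as the text notes, indexing and keyword changes could account for part of the difference.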
With both examples—Public Health Services and Systems Research and the Public Health Accreditation Board—publication of the research agenda has been followed by action on the part of potential funders and partners, including the Robert Wood Johnson Foundation, the Centers for Disease Control and Prevention, and AcademyHealth. For example, the Robert Wood Johnson Foundation’s major 2015 call for proposals on systems of action that align the delivery and financing systems that support their current focus on a culture of health builds on their many funded studies and initiatives related to the research agenda for Public Health Services and Systems Research. 25 In addition, federal funders in the United States often cite a research agenda from a peer-reviewed journal as evidence of the need for more research on a particular topic or as a source of research ideas for potential applicants.
This work has several limitations. First, the use of the logic model framework may have artificially constrained the questions that were considered. Second, there is some degree of overlap across the various categories within the logic model, especially between outputs and outcomes. Although we attempted to maintain consistency in categorization, we realize other investigators, using slightly different definitions of the categories (e.g., outputs, outcomes), may have produced a different grouping. We do not believe such differences, however, would change the underlying substance of the questions. Third, several of the questions as posed suggest binary responses, although in reality it is more likely that answers would come in gradations. Finally, there are no delineations between questions that may be answered in a rather straightforward manner versus those that may be extremely difficult to answer clearly, and there is no suggested prioritization; however, questions were only included if they were deemed to be generally feasible for research purposes.
We believe a strength of this work is the broad participatory and iterative process used to produce the final set of questions. The questions have been informed by researchers focusing on AHDs, by practitioners who participate or have an interest in AHDs, and by 22 organizations represented under the auspices of the Council on Linkages, and they were open for public comment, scrutiny, and input.
This research agenda focused on AHDs provides a basis for formulating strategies for mobilizing collaborative research on the structure, functions, and impacts of AHDs. As Kronstadt et al.24 described in developing a research agenda for public health agency accreditation, a research agenda can help connect the dots from the individual case studies on AHDs—which are becoming more numerous—to more sophisticated studies that are likely to produce a coherent picture of the overall impact of AHDs. The development and sustainment of AHDs provides a series of natural experiments that can offer real-world, mixed-methods (qualitative and quantitative) evidence of impact. Producing a research agenda can facilitate the identification of challenges, from defining and building appropriate data sets to performing practice-based research with high internal validity and producing generalizable results. Using the research agenda as a template for building the evidence base on AHDs also presents researchers, practitioners, and funders with a common foundation for tracking subsequent findings in the published literature. An appropriate next step will be to determine what evidence currently exists for each of the questions in the research agenda, which can serve as a springboard for exploring opportunities to support this research.
We acknowledge Scott Frank, Case Western Reserve University School of Medicine and the Shaker Heights Health Department, Cleveland, OH, for his contributions to earlier aspects of developing the research agenda.
Research Involvement and Engagement, volume 10, Article 97 (2024)
Increasingly, researchers are involving children and young people in designing paediatric research agendas, but as far as we were able to determine, only one report exists on the academic impact of such an agenda. In our opinion, the importance of insight into the impact of research agendas designed together with children and young people cannot be overstated. The first aim of our study was therefore to develop a method to describe the academic impact of paediatric research agendas. Our second aim was to describe the academic impact of research agendas developed by involving children and young people.
We based our method on aspects of the Research Impact Framework developed by Kuruvilla and colleagues and the Payback Framework developed by Donovan and Hanney. We named it the Descriptive Academic Impact Analysis of Paediatric Research Agendas, which consists of five steps: (1) Identification of paediatric research agendas, (2) Citation analysis, (3) Impact analysis, (4) Author assessment, and (5) Classification of the ease of determining traceability.
We included 31 paediatric research agendas that were designed by involving children and young people. These agendas were cited 517 times, ranging from 0 to 71 citations per agenda. A total of 131 new studies (25% of the citing articles) were published based on at least one of the research priorities from an agenda, ranging from 0 to 23 per paediatric research agenda. Sixty of these studies (46%) were developed by at least one of the first, second, or last authors of the paediatric research agenda on which they were based. Based on their accessibility and the ease with which we could identify the studies as being agenda-based, we categorised 44 studies (34%) as easy, 62 studies (47%) as medium, and 25 studies (19%) as difficult to identify.
This study reports on the development of a method to describe the academic impact of paediatric research agendas and it offers insight into the impact of 31 such agendas. We recommend that our results be used as a guide for designing future paediatric research agendas, especially by including ways of tracing the academic impact of new studies concerning the agendas’ research priorities.
Increasingly, researchers are involving children and young people in designing paediatric research agendas. However, few researchers have described the impact of these agendas on the research undertaken. We strongly believe that it is important to know how such agendas affect research; in other words, what their impact is. One of the reasons paediatric research agendas are being designed is to create a clear overview of the research questions that need to be investigated; if the question of impact is left unanswered, why bother designing the agendas at all? We therefore developed a five-step tool to identify these agendas and to describe their impact. We tested our tool on 31 paediatric research agendas that were designed together with children and young people. These agendas were cited 517 times, 131 new studies were based on these agendas, and 60 studies were performed by the same authors who had designed the agendas. Of the new studies, we found 44 that were easy, 62 that were moderately difficult, and 25 that were difficult to identify as being based on paediatric research agendas. We hope that our results will serve as a useful guide for future researchers who aim to involve children and young people in designing research agendas, especially if ways are included to trace the impact of new studies in relation to the most important questions stated in the original research agendas.
Research agendas list research questions addressing knowledge gaps that require further investigation. They serve as a guide for scientists and funders to coordinate and focus research on areas deemed most relevant and impactful. In the past, they were designed primarily by researchers. Currently, the expertise of paediatric patients, their parents, or carers is increasingly recognised as a critical component in paediatric research [1]. In 1995, Chalmers stated that increased involvement of patients and the public in designing research agendas would likely result in a more open-minded approach to which research questions are worth addressing [2]. Later, he found a mismatch between the research questions considered important by patients and the public and those addressed by researchers – a mismatch that resulted in research waste [3]. Funding agencies increasingly value, and researchers, journal editors, and policymakers increasingly demand, that waste in research be reduced [3, 4, 5, 6]. To address mismatches and reduce research waste, Chalmers and colleagues developed the James Lind Alliance (JLA) Research Priority Setting Partnerships (PSPs) [7]. The PSPs ensure that the research questions of patients, carers, and healthcare professionals are prioritised.
Unfortunately, children and young people (CYP) are still rarely involved in designing research agendas on paediatric topics, so-called paediatric research agendas (PRAs). A systematic review published in 2017 showed that CYP had been involved in only four PRAs [8]. Additionally, our recently published review showed that CYP have been involved in 22 additional PRAs since 2017 [9]. One of the reasons CYP are infrequently involved is that there is still a tendency to underestimate the value of their voices, rooted in a belief that they are not competent to speak about health issues related to their bodies [10]. Partnering with CYP to design PRAs is crucial for understanding what is important to them. Research teams sometimes spend years developing agendas in partnership with patients, parents, and carers, but we found that little is known about whether the research priorities are elaborated upon [9, 11]. The question of whether an agenda has an impact on what research is undertaken remains [12]; this is what we refer to as academic impact.
Describing the academic impact of PRAs holds significant importance. First, the academic impact should be described to evaluate whether the aim of reducing research waste is met. Second, describing the academic impact can help identify areas where progress has been made and where additional research efforts are needed. Third, when evaluating the academic impact, it can be determined whether funding agencies use the priorities. Describing the academic impact of the PRAs in which CYP are involved is particularly crucial. Paediatric research agendas that involve CYP require a significant investment of both time and resources [13, 14, 15]. Researchers need to be flexible when scheduling research meetings with CYP because CYP have multiple commitments, such as school and sports, during the typical working hours of researchers. This flexibility may involve conducting research activities during evenings, weekends, or school holidays, when CYP are more readily available. Furthermore, researchers must address ethical considerations, such as power dynamics, and facilitate environments that help CYP feel secure and confident to express themselves freely. Moreover, researchers should respect the authenticity of CYP’s voices [15]. Finally, one of the participants of the JLA PSP on Juvenile Idiopathic Arthritis considered the PSP a waste of time and money should the project end with the publication of the top 10 research priorities, with no attention paid to whether the priorities were being used [16]. Regrettably, little or no attention is given to reporting the academic impact of PRAs [9].
To the best of our knowledge, Geldof and colleagues stand alone in evaluating the academic impact of their PRA [12]. Staley and colleagues performed a qualitative evaluation of what happens after JLA PSPs; however, they did not conduct a systematic search or evaluation of whether the research priorities included in research agendas are elaborated on [17]. Geldof and colleagues evaluated the impact of their agenda six years after its initiation and three years after publishing the PRA [12]. Most of the studies based on their PRA were pharmaceutical-driven studies that focused on prioritising the development and validation of new medical treatments (71%). The authors concluded that the extent to which the current research landscape adequately represents the viewpoint of patients is debatable [12].
Identifying the academic impact of PRAs differs from identifying the research impact of other studies. Identifying the impact of general research aims to demonstrate “the contribution that excellent research makes to society,” as defined by the Economic and Social Research Council [18]. Nevertheless, the impact of such research is difficult to identify, partly because impacts originating from consecutive activities may accumulate over the longer term. Given this, it becomes difficult, sometimes even impossible, to ascertain which activity ultimately contributes to impact [19]. Several approaches, such as the Payback Framework developed by Donovan and Hanney [20] and the Research Impact Framework (RIF) developed by Kuruvilla and colleagues [21], have proven robust and useful for describing research impact. The RIF is divided into four broad areas: research-related impacts, policy impacts, service impacts, and societal impacts. The checklist was developed for academics interested in describing and monitoring the impact of their research. The Payback Framework was originally developed to examine the impact of health services research but has been adapted to assess the impact of research in other areas, such as the social sciences [14]. The Payback Framework consists of five categories: (1) Knowledge, (2) Benefits to future research and research use, (3) Benefits from informing policy and product development, (4) Health and health sector benefits, and (5) Broader economic benefits. The two approaches consist of almost identical categories, and each can be used in the different circumstances in which researchers may seek to describe impact [19]. The limitation of these approaches is that they were not specifically developed to describe the impact of PRAs on what research is undertaken after an agenda is published.
The aim of our study was two-fold. First, to devise a reliable method for describing the academic impact of PRAs. Second, to describe the academic impact of PRAs designed together with CYP. We chose to focus only on describing the academic impact of the PRA in which CYP had been involved because we sought to improve the quality of CYP involvement in designing PRAs. Therefore, we believe that describing the academic impact of these agendas is of utmost importance.
We developed a method to describe the academic impact of PRAs based on the research-related impacts of the RIF (Table 1) and the first two categories of the Payback Framework: Knowledge, and Benefits to Future Research and Research Use (Table 2). The categories of the RIF and the definitions of the Payback Framework that we used are highlighted in both tables. We used these categories because they most closely resemble the academic impact of the PRAs we aimed to evaluate. The other three areas of the RIF and the remaining categories of the Payback Framework relate to impacts other than academic impact (e.g., policy, service, and societal impacts for the RIF, and policy, health, and economic impacts for the Payback Framework). The authors of the RIF state that the themes can be adjusted (including removal, addition, grouping, or modification) to align with the research being described and the relevant assessment criteria [21]. In consultation with a medical information specialist and a methodologist, we named our method the Descriptive Academic Impact Analysis of Paediatric Research Agendas (DAIAPRA). The following section describes the development of the method.
In preparation for creating the impact tool, we defined the academic impact of PRAs using three identifiable factors: (1) the number of citations referencing the agenda, (2) the number of new studies based on the priorities, and (3) the variation in authorship between the original PRA and the subsequent studies. We opted for this approach because the data on citations, new studies, and research teams were readily accessible. Furthermore, we added an evaluative factor to the impact tool, which considered the ease of determining whether a study was based on one of the PRAs. It is important to note that impact encompasses various elements, but our study concentrated solely on those that could be quantified.
We defined the different steps of the impact tool, starting with Step 1: Identifying the PRAs. Next, we determined the data sources and metrics that would serve as the basis for describing impact. We based our PRA impact tool on three components of the section Research-Related Impacts: Publications and Papers, Type of Problem/Knowledge Addressed, and Research Networks and User Involvement (Table 1 ) [ 21 ]. Additionally, our tool drew upon two components of the Payback Framework: Journal Articles and Better Targeting of Future Research (Table 2 ).
In Step 2, we linked the component Publications and Papers of the RIF and the component Journal Articles of the Payback Framework to the number of citations generated by the PRA. Then, in Step 3, we linked the RIF component Type of Problem/Knowledge Addressed and the component Better Targeting of Future Research of the Payback Framework to new, PRA-based studies. Finally, in Step 4, we linked the components Research Networks and User Involvement to the difference in authorship between the PRA and the new studies. To make the impact analysis tool readily accessible, we included only publicly available, easily assessable metrics. To determine how easily a study could be traced to a priority of the PRA, we included Step 5. Steps 1, 3, and 5 rest on a more subjective evaluation; hence, these steps should be performed independently by at least two people. Steps 2 and 4 rest on objective variables that can only be interpreted in one way. To use the method efficiently, we recommend that Steps 3 and 5 be performed simultaneously (Fig. 1). The steps of the DAIAPRA are explained in more detail in the section below.
Descriptive academic impact analysis of paediatric research agendas
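For readers who think in code, the five-step workflow just described can be sketched as a small data structure; the following is a hypothetical encoding (the field names are ours, not the paper's) that flags which steps the authors recommend be performed independently in duplicate:

```python
# Hypothetical encoding of the five DAIAPRA steps described above.
# 'subjective' marks the steps that rest on subjective evaluation and
# should therefore be performed independently by at least two people.

from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    number: int
    name: str
    subjective: bool

DAIAPRA_STEPS = [
    Step(1, "Identification of paediatric research agendas", True),
    Step(2, "Citation analysis", False),
    Step(3, "Impact analysis", True),
    Step(4, "Author assessment", False),
    Step(5, "Classification of ease of tracing", True),
]

# Steps to duplicate across reviewers
double_review = [s.number for s in DAIAPRA_STEPS if s.subjective]
print(double_review)  # [1, 3, 5]
```

This is only a bookkeeping sketch; the substantive screening and consensus work happens between the reviewers, not in code.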
The research team, in partnership with an information librarian, developed the literature search strategy. The strategy utilised Medical Subject Headings and keywords for ‘children’, ‘priority setting partnerships’, and ‘paediatric research agenda’. The search terms within each category were combined using the “OR” operator, and the three categories were linked using the “AND” operator. The search was conducted in MEDLINE, EBSCOhost, Web of Science, and Google Scholar. We utilised both forward and backward citation chasing to ensure that we did not miss any important PRAs; this was done by checking the reference lists of the included studies. Furthermore, the James Lind Alliance page, which lists all JLA Priority Setting Partnerships, was reviewed to identify additional PRAs that were not captured by our search strategy. The resulting articles were then uploaded to the Rayyan screening tool, developed by the Qatar Computing Research Institute (Doha, Qatar), and duplicate entries were eliminated. Several inclusion criteria were applied in the process of identifying the research agendas (Table 3). The search strategy described above was utilised in our recently published review [9]. To include more paediatric research agendas in this study, we added the PRAs identified by Odgers and colleagues in which CYP were included, and we repeated the same search after publication of our review.
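The OR-within, AND-across combination of search terms can be expressed generically; the snippet below is a minimal illustration of that construction (the term lists shown are abbreviated placeholders, not the study's full strategy):

```python
# Hypothetical sketch of the boolean search-string construction described
# above: synonyms within a category are OR-ed, and the categories are AND-ed.

def build_query(categories):
    """OR the terms within each category, then AND the categories together."""
    grouped = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
               for terms in categories]
    return " AND ".join(grouped)

query = build_query([
    ["children", "child", "adolescent"],                      # 'children'
    ["priority setting partnership", "research priorities"],  # 'priority setting partnerships'
    ["paediatric research agenda", "pediatric research agenda"],
])
print(query)
```

In practice each database has its own field tags and thesaurus (e.g., MeSH in MEDLINE), so the literal string would be adapted per platform.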
We uploaded the identified PRAs to Scopus, SciVal, and Altmetric. Scopus is an expertly curated abstract and citation database that provides access to reliable data, metrics, and analytical tools. We extracted the citation data from this database. A medical information specialist from the University of Groningen helped us download the desired information on all the studies into a Microsoft Excel file.
Next, we screened the studies that cited one of the PRAs and examined the context in which the PRA was cited. Two researchers (LP and SB) independently screened the citations and examined whether the PRA was referred to because the study addressed one of the priorities. When disagreements arose, the researchers engaged in discussion until they reached a consensus. The inclusion criteria for Step 3 can be found in Table 4 .
We compared the authors of the studies included in Step 3 to the authors of the PRA on which the studies were based. We examined whether the first, second, or last author of the PRA was involved in the new studies that were based on a specific PRA.
Finally, we classified the studies included during the impact analysis into three categories based on the ease of tracing whether a study addressed a research priority of the PRA: easy (the research priority is explicitly stated in the publication), medium (the research priority is not explicitly stated but could be inferred from the text), and difficult (the research priority is not stated and could not be inferred from the text).
We included 31 PRAs in which CYP were involved [13, 14, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]. Twenty-two PRAs were included because we had identified them in our recently published review [9], four were identified by Odgers and colleagues [8], and five were identified after the publication of our review. At the time of this study, the newest PRA had been published in November 2022 and the oldest in June 2010. The themes of the included PRAs are shown in Table 5. Little variation was found in the methods used to develop the PRAs: the JLA method was used most frequently (n = 22), followed by workshops (n = 2), focus groups (n = 2), the Research Prioritisation by Affected Communities (RPAC) method (n = 1), an online survey (n = 1), and a combination of the methods mentioned (n = 3).
The 31 PRAs were cited 517 times, ranging from 0 to 71 citations per PRA, with a mean of 17 citations per PRA (Supplemental Material 1). The agenda of Batchelor and colleagues received the highest number of citations. Four PRAs have not yet been cited [ 22 , 23 , 24 , 25 ]. Figure 2 presents an overview of the citation count for each PRA.
Citations per paediatric research agenda
Cumulatively, the 31 agendas were cited 517 times, and 131 new studies (25% of the citing articles) were published based on at least one of the research priorities from the PRAs, ranging from 0 to 23 new studies per PRA, with a mean of 4 new studies per PRA. Hollis and colleagues’ PRA attracted the most new studies [26]. Eight PRAs, however, did not yield new studies [13, 22, 23, 24, 25, 27, 28, 29], as we show in Fig. 3.
New studies per paediatric research agenda
Sixty studies (46%) were developed by at least one of the first, second, or last authors of the PRA on which the study was based. Seventy-one studies (54%) were developed by other researchers who did not author the PRA (Fig. 4 and Supplemental Material 2). It is apparent from Fig. 3 that some research teams, such as Baldacchino [30], Lam [31], Medlow [32], Birnie [14], Peeks [33], Aldiss [34], Parsons [35], Layton [36], and Batchelor [37], developed most of the new studies themselves, based on their own PRAs. For Peeks and colleagues, 10 of the 12 new publications included an original author of the agenda. Batchelor and colleagues’ agenda resulted in 18 new studies, 16 of which (89%) included an original author of the agenda. The agendas of authors who had not elaborated on the research priorities themselves, as was the case for eight of the agendas, resulted in the publication of an average of two new studies. The PRA developed by Hollis and colleagues was the most successful and resulted in 23 new studies, only one of which was published by the agenda’s own authors (6%).
Paediatric research agendas authored by the same authors versus other authors
Of the 131 studies analysed (Supplemental Material 2), we classified 44 studies (34%) as easy to trace, meaning that the article directly quoted the research priority from the agenda it was addressing. We classified 62 studies (47%) as medium, indicating that although the research priority was not directly stated in the article, it was clear to us from the study’s aim and research question which priority it focused on. The remaining 25 studies (19%) we classified as difficult, meaning that although the authors claimed to be following up on research deemed most important by the agendas, it was not clear to us which of the research priorities the studies focused on.
To the best of our knowledge, this is the first study to evaluate the academic impact of multiple PRAs. To achieve this, we developed the DAIAPRA, a five-step tool: the first step identifies the PRAs, the next three steps evaluate their academic impact, and the last step classifies the ease of tracing whether a study addresses a research priority of the PRA. Using this tool, we found that citations ranged from 0 to 71 per PRA and that new studies based on a PRA ranged from 0 to 23. Furthermore, 46% of the new studies were developed by at least one of the first, second, or last authors of the PRA on which the study was based. Finally, only 34% of the new studies explicitly stated the research priority they focused on, making the priority easy to trace in these cases.
We found that the number of new studies based on a PRA varied between 0 and 23. A factor that might have influenced this wide range is that we included PRAs from 2010 up to and including 2022; older PRAs have had more time to generate impact, whereas new agendas still need time to do so. Another possible influence is that the newer agendas were published during the COVID-19 pandemic: Raynaud and colleagues showed a dramatic increase in COVID-19 publications and a substantial decrease in non-COVID-19 research during that time [51]. Another element that might play a significant role is whether an agenda received advance funding for elaborating on the research priorities; if so, researchers could start elaborating on the research priorities immediately, instead of first having to find appropriate research funders. Moreover, some funding programmes set their own priorities for research and then advertise for research teams to conduct the research [52], which might have resulted in certain agendas being studied more frequently. Another aspect that caught our attention is that eight of the ten PRAs leading to the most new studies were developed in collaboration with the JLA, suggesting that partnerships with established organisations like the JLA can greatly amplify the impact and reach of PRAs. A further factor could be that some researchers were unaware of, and thus paid no attention to, the dissemination and implementation of the PRAs [53]. Because the authors of a PRA can provide valuable information and examples of the impact of their own work that would otherwise be unavailable, we interviewed researchers and CYP who had designed a PRA together about the academic impact of their PRAs. We found that researchers were hardly aware of new studies based on their PRAs [11].
This lack of awareness can be addressed by emphasising the importance of disseminating and implementing PRAs; greater awareness of and emphasis on implementation might enhance their academic impact. The JLA guidebook was updated in 2021, and Chapters 9, 10, and 11 deal with the dissemination and publication of the research agenda, prioritising the research funders, and long-term follow-up [52]. Concentrating on the phase following a PRA's design might already create the awareness that researchers need to take responsibility for encouraging the research and funding community to address the research priorities.
Another strong argument for prioritising the implementation of a research agenda is to ensure continuous and transparent communication with the CYP involved. Keeping CYP updated on the progress of the research priorities is essential: it shows them that their input is valued and has contributed to meaningful change [54]. Mawn and colleagues found that researchers can be criticised for failing to engage or update CYP as research progresses [55]. If nothing is done, CYP may lose trust and get the impression that what concerns them is unimportant, or that it is not as important or valued by others in a position to fund research [53].
Interestingly, almost half of the new studies based on the PRAs were developed by the first, second, or last author of the corresponding PRA. Researchers who design a PRA can use the agenda as a roadmap for their own work, helping them identify research questions and develop studies likely to contribute to the broader goals of their field [56]. Our study opens the discussion of whether the academic impact of a PRA is truly achieved when a substantial portion of the new studies based on it are authored by the same researchers who developed the agenda. The primary aim of a PRA is to change the broader context of research, and it is questionable whether this aim is achieved when nearly half of the new studies are published by the same researchers. We believe an important distinction should be made when evaluating the academic impact of PRAs: the expected impact depends on whether the agenda was designed by an entire research field or by a specific research team. When all key researchers are involved in designing a PRA, it follows naturally that they and their research teams are the ones who elaborate on the priorities; this is particularly fitting for a research agenda designed within a niche. However, when a research agenda covers a broad subject such as diabetes, it is practically impossible to include all key stakeholders, and one would expect the priorities to be elaborated by researchers other than those who designed the agenda. At present, it is impossible to determine whether a PRA was designed by an entire research field or by a specific research team, which makes it more challenging to place the academic impact of a research agenda in context.
If researchers who were not involved in designing a PRA continue to address research questions that they themselves consider important, instead of focusing on the priorities of the agenda, we question whether designing a PRA has the intended impact of changing the broader context of research.
We developed the DAIAPRA because no reliable method was available for describing the academic impact of PRAs. Our method focuses on quantifiable aspects of impact, such as citations, new studies based on the PRA, and the overlap in authorship between the PRA and the new studies. We acknowledge that the method does not capture all potential contributors to the impact of a PRA, such as conference presentations on the agenda or funding obtained for its priorities. The method is still in its infancy, and additional metrics could be incorporated to provide a more complete understanding of academic impact.
While we managed to evaluate the academic impact of 31 PRAs, our study has several limitations. First, the academic impact of the PRAs could not be compared directly because they were published in different years and cover different research areas. Furthermore, our study offers only an initial perspective on the academic impact of PRAs: in this first evaluation, we included quantifiable aspects of impact only. Qualitative forms of impact that our method does not measure, such as improved collaboration, influence on policy and practice, or increased public understanding of research, are nonetheless crucial for understanding the full impact of a research agenda. We therefore acknowledge that potentially unknown positive or negative impacts were missed. Our goal was not to classify the agendas according to their levels of impact, but rather to present a comparative analysis of their respective academic impacts. The original inclusion criterion was limited to PRAs involving CYP below the age of 18, but this initial search delivered a limited number of results. Five more studies were therefore included: two PRAs in which the age of the CYP was not specified, and three in which the CYP were below 20 or 25 years of age. It is currently challenging to examine research agendas involving only CYP under the age of 18, as the age of the CYP is not always clearly described in the PRA. Finally, we acknowledge that by evaluating the English-language literature only, we may have excluded valuable research published in other languages. However, given that English is the predominant language of academic communication and our focus was on studies published in recognised academic databases, we prioritised English-language publications; consequently, important work in non-English or non-academic sources may have been overlooked.
The objective of this study was to initiate discussion and create awareness about the academic impact of research agendas. We did not aim to classify research agendas as having 'good' or 'bad' academic impact, because the extent to which academic impact is achieved depends on many factors, all of which must be considered when comparing agendas; this makes comparing PRAs challenging.
The DAIAPRA can be used by other researchers to evaluate the academic impact of research agendas. Our findings once again indicate the importance of the post-PRA phase: a research project does not end when the top 10 research priorities have been agreed upon. Researchers should disseminate the results of their PRAs to increase exposure to potential funders and researchers. In addition, our results showed that it is difficult to determine whether a research priority has been elaborated on. This argues for establishing a system that makes it possible to trace whether a study is based on one of the priorities of a research agenda. For example, researchers could include a statement in the PRA specifying how the listed priorities should be cited or referenced when researchers elaborate on one of them. A publicly accessible overview of which priorities are addressed by whom, and in which country, for each research agenda could contribute to greater transparency, and would also systematically highlight the remaining priorities that require further investigation. Our results further indicated that it is challenging to contextualise the academic impact of a PRA when it is unclear who designed it. We therefore suggest adding a statement indicating whether the agenda was designed by all key stakeholders or researchers, or by a specific research team.
Future studies should examine why some PRAs generated more new studies than others, to guide researchers in creating academic impact. We focused only on the academic impact of the PRAs; future research should also address their policy and societal impact. This is important because, alarmingly, an estimated 85% of medical research evidence never finds its way into clinical practice [5].
Our study contributes to the development of a methodology for evaluating the academic impact of PRAs and provides initial insight into the academic impact of 31 PRAs. Our findings could be used to inform future PRA design, especially by incorporating provisions for tracing the academic impact of new studies related to the research priorities outlined in the agenda. Overall, our study provides a valuable foundation for further research into the evaluation of academic impact in the field of paediatric research.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Descriptive Academic Impact Analysis of Paediatric Research Agendas
James Lind Alliance
Paediatric research agenda
Research Impact Framework
Tallon D, Chard J, Dieppe P. Relation between agendas of the research community and the research consumer. Lancet. 2000;355:2037–40.
Chalmers I. What do I want from health research and researchers when I am a patient? BMJ. 1995;310:1315–8.
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–9.
Zurynski Y, Smith CL, Knaggs G, Meulenbroeks I, Braithwaite J. Funding research translation: how we got here and what to do next. Aust N Z J Public Health. 2021;45(5):420.
ZonMw. Procedure voor aanvragers [Procedure for applicants]. 2019; https://www.zonmw.nl/fileadmin/zonmw/documenten/Corporate/ZonMw_A4_ProcedureAanvragers_201906_def_Projectnet_MijnZonMw_MP.pdf
Hughes A, Martin B. Enhancing Impact: The Value of Public Sector R&D. 2012; https://www.jbs.cam.ac.uk/2012/enhancing-impact-the-value-of-public-sector-rd/ . Accessed February, 2022.
Chalmers I, Atkinson P, Fenton M, Firkins L, Crowe S, Cowan K. Tackling treatment uncertainties together: the evolution of the James Lind Initiative, 2003–2013. J R Soc Med. 2013;106(12):482.
Odgers HL, Tong A, Lopez-Vargas P, Davidson A, Jaffe A, McKenzie A, et al. Research priority setting in childhood chronic disease: a systematic review. Arch Dis Child. 2018;103(10):942–.
Postma L, Luchtenberg ML, Verhagen AAE, Maeckelberghe EL. Involving children and young people in paediatric research priority setting: a narrative review. BMJ Paediatr Open. 2022;6(1).
Mohr WK, Suess S. The conundrum of children in the US health care system. Nurs Ethics. 2001;8(3):196–210.
Postma L, Luchtenberg ML, Verhagen AAE, Maeckelberghe ELM. 'It's powerful': the impact of involving children and young people in developing paediatric research agendas: a qualitative interview study. Health Expect. 2024;27(2).
Geldof J, Leblanc J, Lucaciu L, Segal J, Lees CW, Hart A. Are we addressing the top 10 research priorities in IBD? Frontline Gastroenterol 2021;12(7).
Schilstra CE, Sansom-Daly UM, Schaffer M, Fardell JE, Anazodo AC, McCowage G, et al. 'We have all this knowledge to give, so use us as a resource': partnering with adolescent and young adult cancer survivors to determine consumer-led research priorities. J Adolesc Young Adult Oncol. 2021:211–22.
Birnie KA, Dib K, Ouellette C, Dib MA, Nelson K, Pahtayken D, et al. Partnering for Pain: a Priority setting Partnership to identify patient-oriented research priorities for pediatric chronic pain in Canada. CMAJ open. 2019;7(4):E654–64.
Montreuil M, Bogossian A, Laberge-Perrault E, Racine E. A review of approaches, strategies and ethical considerations in Participatory Research with Children. Int J Qualitative Methods. 2021;20:160940692098796.
Aussems K, Schoemaker CG, Verwoerd A, Armbrust W, Cowan K, Dedding C. Research agenda setting with children with juvenile idiopathic arthritis: lessons learned. Child Care Health Dev. 2021:68–79.
Staley K, Crowe S, Crocker JC, Madden M, Greenhalgh T. What happens after James Lind Alliance priority setting partnerships? A qualitative study of contexts, processes and impacts. Res Involv Engagem. 2020;6(1):41.
UK Research and Innovation. Defining impact. https://www.ukri.org/councils/esrc/impact-toolkit-for-economic-and-social-sciences/defining-impact/ . Accessed November 12, 2022.
Greenhalgh T, Raftery J, Hanney S, Glover M. Research impact: a narrative review. BMC Med. 2016;14(1).
Donovan C, Hanney S. The 'Payback Framework' explained. Res Eval. 2011;20(3):181–3.
Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv Res. 2006;6(1):12.
Vera San Juan N, Oram S, Pinfold V, Temple R, Foye U, Simpson A, et al. Priorities for future research about screen use and adolescent mental health: a participatory prioritization study. Front Psychiatry. 2022;13.
Smith EMD, Egbivwie N, Cowan K, Ramanan AV, Pain CE. Research priority setting for paediatric rheumatology in the UK. Lancet Rheumatol. 2022;4(7):e517–24.
Ismail D, McAteer H, Majeed-Ariss R, McPhee M, Griffiths CEM, Young HS. Research priorities and identification of a health-service delivery model for psoriasis from the UK Psoriasis Priority Setting Partnership. Clin Exp Dermatol. 2020;46(2):276.
von Scheven E, Nahal BK, Cohen IC, Kelekian R, Franck LS. Research questions that Matter to us: priorities of young people with chronic illnesses and their caregivers. Pediatr Res. 2021;89(7):1659–63.
Hollis C, Sampson S, Simons L, Davies EB, Churchill R, Betton V, et al. Identifying research priorities for digital technology in mental health care: results of the James Lind Alliance Priority Setting Partnership. Lancet Psychiatry. 2018;5(10):845–54.
Schoemaker CG, Armbrust W, Swart JF, Vastert SJ, van Loosdregt J, Verwoerd A, et al. Dutch juvenile idiopathic arthritis patients, carers and clinicians create a research agenda together following the James Lind Alliance method: a study protocol. Pediatr Rheumatol Online J. 2018;16(1):57.
Knight SR, Metcalfe L, O’Donoghue K, Ball ST, Beale A, Beale W, et al. Defining priorities for Future Research: results of the UK kidney Transplant Priority setting Partnership. PLoS ONE. 2016;11(10):e0162136.
Manikam L, Shah R, Reed K, Santini G, Lakhanpaul M. Using a co-production prioritization exercise involving south Asian children, young people and their families to identify health priorities requiring further research and public awareness. Health Expect. 2017;20(5):852.
Vella-Baldacchino M, Perry DC, Roposch A, Nicolaou N, Cooke S, Ellis P, et al. Research priorities in children requiring elective surgery for conditions affecting the lower limbs: a James Lind Alliance Priority setting Partnership. BMJ open. 2019;9(12):e033233.
Lam JR, Liu B, Bhate R, Fenwick N, Reed K, Duffy JMN, et al. Research priorities for the future health of multiples and their families: The Global Twins and multiples Priority setting Partnership. Ultrasound Obstet Gynecology: Official J Int Soc Ultrasound Obstet Gynecol. 2019;54(6):715–21.
Medlow S, Patterson P. Determining research priorities for adolescent and young adult cancer in Australia. Eur J Cancer Care. 2015;24(4):590–9.
Peeks F, Boonstra WF, de Baere L, Carøe C, Casswall T, Cohen D, et al. Research priorities for liver glycogen storage disease: an international priority setting partnership with the James Lind Alliance. J Inherit Metab Dis. 2020;43(2):279–89.
Aldiss S, Fern LA, Phillips RS, Callaghan A, Dyker K, Gravestock H, et al. Research priorities for young people with cancer: a UK priority setting partnership with the James Lind Alliance. BMJ open. 2019;9(8):e028119.
Parsons S, Thomson W, Cresswell K, Starling B, McDonagh JE; Barbara Ansell National Network for Adolescent Rheumatology. What do young people with rheumatic disease believe to be important to research about their condition? A UK-wide study. Pediatr Rheumatol. 2017;15.
Layton A, Eady EA, Peat M, Whitehouse H, Levell N, Ridd M, et al. Identifying acne treatment uncertainties via a James Lind Alliance Priority Setting Partnership. BMJ open. 2015;5(7):e008085.
Batchelor JM, Ridd MJ, Clarke T, Ahmed A, Cox M, Crowe S, et al. The Eczema Priority Setting Partnership: a collaboration between patients, carers, clinicians and researchers to identify and prioritize important research questions for the treatment of eczema. Br J Dermatol. 2013;168(3):577–82.
Pagnamenta E, Longhurst L, Breaks A, Chadd K, Kulkarni A, Bryant V, et al. Research priorities to improve the health of children and adults with dysphagia: a National Institute of Health Research and Royal College of Speech and Language Therapists research priority setting partnership. BMJ Open. 2022;12(1):e049459.
Grant A, Crane M, Laupacis A, Griffiths A, Burnett D, Hood A, et al. Engaging patients and caregivers in research for pediatric inflammatory bowel disease: top 10 research priorities. J Pediatr Gastroenterol Nutr. 2019;69(3):317–23.
Rankin G, Summers R, Cowan K, Barker K, Button K, Carroll SP, et al. Identifying priorities for physiotherapy research in the UK: the James Lind Alliance Physiotherapy Priority Setting Partnership. Physiotherapy 2020;107:161–8.
Gill PJ, Bayliss A, Sozer A, Buchanan F, Breen-Reid K, De Castris-Garcia K, et al. Patient, caregiver, and clinician participation in prioritization of research questions in pediatric hospital medicine. JAMA Netw Open. 2022;5(4):e229085.
Lopez-Vargas P, Tong A, Crowe S, Alexander SI, Caldwell PHY, Campbell DE, et al. Research priorities for childhood chronic conditions: a workshop report. Arch Dis Child 2019;104(3):237–45.
Lim AK, Rhodes S, Cowan K, O’Hare A. Joint production of research priorities to improve the lives of those with childhood onset conditions that impair learning: the James Lind Alliance Priority Setting Partnership for ‘learning difficulties’. BMJ open 2019;9(10):e028780.
Fackrell K, Stratmann L, Kennedy V, MacDonald C, Hodgson H, Wray N, et al. Identifying and prioritising unanswered research questions for people with hyperacusis: James Lind Alliance Hyperacusis Priority Setting Partnership. BMJ Open. 2019;9(11):e032178.
Obeid N, McVey G, Seale E, Preskow W, Norris ML. Cocreating research priorities for anorexia nervosa: the Canadian Eating Disorder Priority Setting Partnership. Int J Eat Disord 2020;53(5):392–402.
Shattuck PT, Lau L, Anderson KA, Kuo AA. A national research agenda for the transition of youth with autism. Pediatrics. 2018;141(Suppl 4):S355–61. https://doi.org/10.1542/peds.2016-4300M
Tunnicliffe DJ, Singh-Grewal D, Craig JC, Howell M, Tugwell P, Mackie F, et al. Healthcare and research priorities of adolescents and young adults with systemic lupus erythematosus: a mixed-methods study. J Rheumatol. 2017;44:444–51.
Morris RL, Stocks SJ, Alam R, Taylor S, Rolfe C, Glover SW, et al. Identifying primary care patient safety research priorities in the UK: a James Lind Alliance Priority Setting Partnership. BMJ open 2018;8(2):e020870.
Finer S, Robb P, Cowan K, Daly A, Shah K, Farmer A. Setting the top 10 research priorities to improve the health of people with Type 2 diabetes: a Diabetes UK–James Lind Alliance Priority Setting Partnership. Diabet Med. 2018;35(7):862–70.
Elwyn G, Crowe S, Fenton M, Firkins L, Versnel J, Walker S, et al. Identifying and prioritizing uncertainties: patient and clinician engagement in the identification of research questions. J Eval Clin Pract. 2010;16(3):627–31.
Raynaud M, Goutaudier V, Louis K, Al-Awadhi S, Dubourg Q, Truchot A, et al. Impact of the COVID-19 pandemic on publication dynamics and non-COVID-19 research production. BMC Med Res Methodol. 2021;21(1).
James Lind Alliance. The James Lind Alliance Guidebook. 2021; https://www.jla.nihr.ac.uk/jla-guidebook/ . Accessed April, 2021.
Gibson F, Aldiss S. What are the consequences of not responding to research priority setting exercises? Cancer Care Res Online. 2023;3(1):e037.
Jongsma K, van Seventer J, Verwoerd A, van Rensen A. Recommendations from a James Lind Alliance priority setting partnership - a qualitative interview study. Res Involv Engagem. 2020;6(1):68.
Mawn L, Welsh P, Kirkpatrick L, Webster LAD, Stain HJ. Getting it right! Enhancing youth involvement in mental health research. Health Expect. 2015;19(4):908.
Ertmer PA, Glazewski KD. Developing a research agenda: contributing new knowledge via intent and focus. J Comput High Educ. 2014;26(1):54–68.
We would like to express our sincere appreciation to Robin Ottjes, Medical Information Specialist, for his remarkable efforts in developing an Excel format that facilitated the extraction of all relevant information from studies that cited one of the included paediatric research agendas. Additionally, we would like to thank Selin Bogers for her valuable contributions as an independent researcher for steps three, four and five. Her contribution ensured an accurate assessment of the academic impact of the paediatric research agendas. We extend our gratitude to Titia van Wulfften Palthe, PhD, for correcting the English manuscript.
This study was conducted without external financial support.
Authors and affiliations.
University of Groningen, University Medical Center Groningen, Beatrix Children’s Hospital, Hanzeplein 1, Groningen, 9713 GZ, The Netherlands
L. Postma, M. L. Luchtenberg, A. A. E. Verhagen & E. L. M. Maeckelberghe
LP contributed to the conception, design, analysis, and interpretation of the data. She drafted the manuscript. SB contributed to Steps 3, 4, and 5 of the DAIAPRA analysis as an independent researcher (analysis of the data). ML, EV and EM contributed to the conception, design, and interpretation of the data and revised the manuscript. All the authors read and approved the final manuscript.
Correspondence to L. Postma .
Ethics approval and consent to participate.
Not applicable.
Competing interests.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Postma, L., Luchtenberg, M.L., Verhagen, A.A.E. et al. The academic impact of paediatric research agendas: a descriptive analysis. Res Involv Engagem 10 , 97 (2024). https://doi.org/10.1186/s40900-024-00630-x
Received : 27 February 2024
Accepted : 22 August 2024
Published : 20 September 2024
ISSN: 2056-7529
Implementation Science Communications volume 5, Article number: 98 (2024)
Implementation science scholars have made significant progress identifying factors that enable or obstruct the implementation of evidence-based interventions, and testing strategies that may modify those factors. However, little research sheds light on how or why strategies work, in what contexts, and for whom. Studying implementation mechanisms—the processes responsible for change—is crucial for advancing the field of implementation science and enhancing its value in facilitating equitable policy and practice change. The Agency for Healthcare Research and Quality funded a conference series to achieve two aims: (1) develop a research agenda on implementation mechanisms, and (2) actively disseminate the research agenda to research, policy, and practice audiences. This article presents the resulting research agenda, including priorities and actions to encourage its execution.
Building on prior concept mapping work, in a semi-structured, 3-day, in-person working meeting, 23 US-based researchers used a modified nominal group process to generate priorities and actions for addressing challenges to studying implementation mechanisms. During each of the three 120-min sessions, small groups responded to the prompt: “What actions need to be taken to move this research forward?” The groups brainstormed actions, which were then shared with the full group and discussed with the support of facilitators trained in structured group processes. Facilitators grouped critical and novel ideas into themes. Attendees voted on six themes they prioritized to discuss in a fourth, 120-min session, during which small groups operationalized prioritized actions. Subsequently, all ideas were collated, combined, and revised for clarity by a subset of the authorship team.
From this multistep process, 150 actions emerged across 10 priority areas, which together constitute the research agenda. Actions included discrete activities, projects, or products, and ways to shift how research is conducted to strengthen the study of implementation mechanisms.
This research agenda elevates actions to guide the selection, design, and evaluation of implementation mechanisms. By delineating recommended actions to address the challenges of studying implementation mechanisms, this research agenda facilitates expanding the field of implementation science, beyond studying what works to how and why strategies work, in what contexts, for whom, and with which interventions.
This research agenda operationalizes a set of activities to strengthen the implementation science field’s focus on why and how strategies work.
The research agenda addresses the following activities: accumulating knowledge, innovating methods and overcoming design challenges, improving measurement, providing guidance for specifying causal mechanisms, increasing focus on theorizing, engaging the policy and practice community, engaging funders, building capacity, enhancing equity, and effectively disseminating methods.
Studying implementation mechanisms can promote pragmatic strategy development, equitable processes and outcomes, and policy relevance by clarifying pathways for overcoming contextually specific barriers and achieving outcomes of interest.
Some see implementation science as not just a pathway, but the pathway for advancing equity in healthcare access and outcomes, and equitable population health [1]. Although this research pathway can lead to equity, it is certainly not guaranteed; in fact, like many fields, most implementation science theories, models, and frameworks did not center equity until recently [2]. This omission leaves implementation studies and strategies vulnerable to unintended consequences (or ripple effects) that might actually exacerbate disparities [3, 4]. The field of implementation science has made significant progress in this regard. Scholars like Woodward et al. [5] offer practical guidance for incorporating health equity domains into implementation determinant frameworks, and Gaias et al. [6] proposed a process to evaluate and adapt implementation strategies to promote equity. Walsh-Bailey is developing a resource to guide the integration of equity into strategy selection, design, and specification [7]. Moreover, numerous efforts collate factors that enable or obstruct the implementation of evidence-based interventions [8, 9, 10], and compile behavior change techniques and implementation strategies that may modify these factors [11, 12, 13, 14, 15, 16]. Even with these advances, little research sheds light on how or why strategies work, in what contexts, and for whom [17, 18, 19, 20, 21]. Studying implementation mechanisms, or the processes through which strategies exert their effects on outcomes, can address this research gap to meaningfully advance the field of implementation science and enhance its value in facilitating equitable policy and practice change. Mechanistic implementation research can identify potential mediators or moderators that illuminate differential strategy impact based on factors such as gender, race/ethnicity, and socioeconomic status, and can center understanding of equitable approaches to implementation science and practice.
One of the principles of implementation science is that context matters, and by nature, each context is unique. The people, their interactions, their physical environment and resources, and their history and beliefs about the future are among the aspects that vary across clinics in the same organization, schools in the same district, and hospitals in the same health system. As implementation science evolves, complex and costly strategies are increasingly being deployed, making equity issues especially pronounced for those receiving care in under-resourced settings [20]. Evidence suggests that tailored implementation may be superior to standardized approaches [22, 23], but tailoring in the absence of understanding strategy mechanisms may compromise outcomes for some or undermine the scaling of positive outcomes. Establishing a strategy's mechanisms of action means that the essence of how the strategy works is known and empirically supported. Therefore, when tailoring, adapting, or modifying a strategy to fit different contexts, the essence of its operation can be retained. When strategies are streamlined to fit contextual constraints or adapted for better fit, the mechanism must still be activated if we are to expect the same outcome. Conversely, if strategies underperform or fail in certain settings, unpacking the causal pathway can isolate contextual factors that threaten mechanism activation or demand a new mechanism altogether. This is not to say that simply studying mechanisms will guarantee equitable outcomes, but in studying them, equitable implementation processes and outcomes become more likely.
To this end, in 2017, the Society for Implementation Research Collaboration (SIRC) conference theme centered implementation mechanisms to elevate dialogue and research about "What Makes Implementation Work and Why?" [24]. SIRC is a not-for-profit society that convenes scholars, practitioners, policy makers, and others interested in advancing rigorous evaluation of implementation initiatives. SIRC's call to action was motivated by the observation that, across trials, heterogeneity is the rule rather than the exception, and weak main effects result. Thus, advancing the study of implementation mechanisms may offer benefits to research and practice communities. For example, identifying and evaluating mechanisms can help researchers learn from null studies [17] and optimize strategies for subsequent efforts or different objectives (e.g., equity, effectiveness, scalability) [25]. Articulating mechanisms can guide the practice community in identifying the impact that strategies might have on their outcomes and inform their design or tailoring of strategies to the local context [26, 27]. Despite this call, only 7% of abstracts included at the subsequent (2019) SIRC conference [28] explicitly "featured the study of implementation mechanisms" [29].
In response to this need to advance the study of mechanisms, we convened an Agency for Healthcare Research and Quality-funded 3-year conference series titled, “Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration” [ 30 ]. The specific aims were to (1) develop a research agenda on implementation mechanisms, and (2) disseminate the research agenda to research, policy, and practice audiences. Similar to processes used for generating related research agendas (e.g., sustainability research [ 31 ]), concept mapping was employed in the first two years of the grant to elucidate challenges to advancing implementation mechanisms research [ 30 , 32 ] and to organize these ideas into conceptually distinct clusters. Reported in more detail elsewhere [ 30 , 32 ], concept mapping analyses yielded a 12-cluster solution that organized 105 challenge statements within five “super clusters” of mechanism research domains: (1) Accumulating Knowledge, (2) Conceptualization and Measurement, (3) Methods and Design, (4) Strategy, Mechanisms, Determinant, Outcome Linkages, and (5) Theory, Causality, and Context. See Table 1 for a complete list of identified challenges organized by cluster. These concept mapping results provided the basis for the research agenda. This paper describes how actions that could overcome those challenges were identified and presents the resulting research agenda.
The research agenda was developed by the Mechanisms Network of Expertise (MNoE). The MNoE is composed of over 40 invited implementation scientists who are diverse with respect to several dimensions (e.g., gender, race/ethnicity, stage of career, focus on priority populations, research settings) but predominantly United States (US)-based (4 scholars are from outside the US; see Additional File 1). Expertise ranged across various aspects of implementation mechanism research, including strategy development, measurement, design, theory, and practice. We gathered collective wisdom and engaged in reciprocal learning with these experts through immersive, multi-day "Deep Dive" meetings.
A US-based (Footnote 1) subset of the MNoE ( N = 23) met in person for a 3-day Deep Dive to address two goals: 1) expand upon the challenges derived from the previously completed concept mapping, and 2) generate ideas or actions (hereafter referred to simply as actions), organized by priority areas, that constitute the research agenda to advance the study of implementation mechanisms. To this end, attendees received handouts with the cluster solution from concept mapping and the list of statements associated with each cluster (Table 1). These two goals were pursued through four 120-min sessions, each comprising a 75-min small-group activity followed by a 45-min large-group activity (Table 2). Group activities were structured using evidence-informed, semi-structured group problem-solving activities, called "scripts," derived from operations, consulting, and systems science methods [ 33 , 34 ] (Table 3). Scripts include discussion prompts, guidelines about how time is spent (e.g., in small versus large groups), roles to be assumed by individuals (e.g., timekeeper), and session goals (e.g., brainstorming actions for a given cluster). A core planning team ( n = 5) selected scripts from a repository and tailored them to the Deep Dive objectives (e.g., identifying actions for addressing challenges to studying implementation mechanisms) across the sessions. Tailoring included adjusting the time allocated for each script, the examples used, and the wording of the prompts. The planning team assigned small-group membership beforehand to ensure groups were diverse regarding career stage and content or methodological expertise. Small-group composition changed by session to stimulate creative conversation and cross-pollinate ideas by exposing attendees to new perspectives.
A tailored Nominal Group Technique process was followed for the first three sessions. Instead of first brainstorming individually, as in the traditional Nominal Group Technique [ 35 ], small groups first generated action ideas before sharing, discussing, and voting on priority ideas with the large group. Attendees did the following in small groups before convening as a large group (Table 3): 1) Assign group roles, including scribe (to record discussion), reporter (to report back to the large group), and timekeeper; individuals could assume more than one role. 2) Brainstorm actions for inclusion in the research agenda that address the challenges from the five super-clusters (the planning team assigned which super-clusters were discussed during each of these sessions); actions could include methods, tools, activities, meetings, research products, research foci, disciplines, or people/perspectives to be engaged. 3) Prioritize two actions for full-group discussion: based on consensus, groups selected one idea they favored and one idea that was complex, underdeveloped, or surprising to work through. Small groups were encouraged to spend approximately 60 min brainstorming and 15 min prioritizing actions. Each prioritized action was submitted on paper for sharing in the large-group session. Groups were encouraged to write down as many actions as they could generate. Scribes' notes were later analyzed (see below). All actions generated, not just those prioritized for deeper discussion, were considered in developing the research agenda.
During the first three large-group discussions, each group's reporter briefly described how their two prioritized actions would advance the study of implementation mechanisms. Each group had 5 min to share and take questions. While groups shared out, facilitators collected the papers and grouped similar actions on a wall visible to all. After all small groups had shared, the facilitator summarized the action themes. The large group collectively reflected on these themes and used the remaining time to further develop prioritized actions.
The fourth (final) session synthesized and expanded actions brought forth in the preceding sessions. Each attendee used five votes to indicate preferred actions (or groups of actions) [ 36 ]. The highest-voted actions ( n = 6) were prioritized for this session. Attendees self-selected into small groups based on which prioritized actions they wanted to discuss. During the final large-group session, each group shared how the actions had evolved and whether new actions had emerged. One facilitator synthesized actions and asked clarifying questions, while another captured actions and priorities on large pieces of paper for the large group to see and discuss.
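The dot-voting step described above amounts to a simple tally-and-rank procedure. The following is a minimal sketch in code; the action labels and ballots are entirely hypothetical and do not reflect the actual Deep Dive data (in the session itself, the top n = 6 actions were retained rather than the 2 shown here):

```python
from collections import Counter

# Hypothetical ballots: each attendee distributes five votes across candidate
# actions; repeat votes for the same action are allowed, as in dot voting.
ballots = [
    ["repository", "meta-lab", "repository", "theory-guide", "training"],
    ["meta-lab", "repository", "meta-lab", "plain-language", "training"],
    ["theory-guide", "repository", "training", "meta-lab", "repository"],
]

# Tally all votes across attendees and keep the highest-voted actions
# for small-group discussion.
tally = Counter(vote for ballot in ballots for vote in ballot)
top_actions = [action for action, _ in tally.most_common(2)]
print(top_actions)  # → ['repository', 'meta-lab']
```

Counter-based tallying makes ties visible (equal counts) so facilitators can decide how to break them, rather than the code deciding silently.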
To populate the research agenda, a subgroup ( N = 6) of attendees extracted data from notes taken across the Deep Dive. Refer to Table 4 for the terms (and definitions) used to organize the research agenda. All unique actions were extracted from each session note that covered at least one super-cluster. Each session note was assigned a primary and a secondary coder. Coders met monthly as a group to refine the process and discuss emergent content. The primary coder extracted action data and refined the language into a succinct, coherent action based on: (1) the content of the notes, (2) the context of the larger discussion in the notes, (3) discussion with colleagues (during and/or after the Deep Dive), and (4) consideration of the broader literature. The secondary coder checked data accuracy, separated or grouped actions to ensure each reflected a singular activity, and refined the wording of each action. Coders were encouraged to interpret the data to generate additional actions. Coders then worked across sessions to clarify and condense the list of actions, reduce redundancy, and organize actions into priority areas ("priorities"). Given the number of actions identified for each priority area, it became clear that organizing actions within priorities by goals could offer a useful, high-level summary. Coders reviewed all actions in a priority and articulated 2–4 goals, each achievable through a subset of actions. Each action was then labeled with its corresponding goal. Lastly, the first author synthesized all actions and associated goals within each priority, solicited input from the full authorship team, and refined the data to yield the final research agenda.
Table 5 presents the refined list of MNoE-generated actions, organized by priorities and goals, as a research agenda to advance the study of implementation mechanisms. Although not required by our method, the priorities reflected all five super-clusters from the concept mapping solution. In addition, priorities emerged specific to Engagement (of policy and practice communities, as well as funders) and Growing the Field, in terms of both capacity (the number of knowledgeable researchers) and skills specific to studying mechanisms. The MNoE generated 150 unique actions across 10 priority areas (range: 11–19 actions per area). These actions included a mix of discrete activities, projects, or products, as well as ways to shift how research is conducted to center implementation mechanisms. Wherever possible, citations are included in the table to offer exemplars that illustrate the intent behind each action.
Here, we briefly describe each priority and the types of associated actions. Table 5 presents additional details, including the goals that each priority area might achieve. The first set of priorities aligns directly with the super-clusters from the concept mapping solution.
Accumulate Knowledge within and Across Disciplines includes 19 actions featuring, for example, specific systematic reviews and meta-analyses, as well as research questions that would drive this type of evidence synthesis (e.g., determining whether mechanisms are universal or whether variation across contexts is observed).
Prioritize Mechanism Research and Incorporate Other Knowledge includes 11 actions that would bring together transdisciplinary teams across fields where mechanisms are likely a prominent area of research, such as psychology and epidemiology.
Overcome Design Challenges and Innovate Methods includes 18 actions where new methods are needed (e.g., modeling time in quantitative assessment to isolate specific mechanisms) and identifies underused methods offering specific value (e.g., comparative case studies to generate hypotheses about complex mechanistic pathways).
Improve Measurement includes 13 actions, such as pragmatic approaches for objective data collection and those that capture lived experiences—an essential measurement component to understand when disparities might be addressed or exacerbated through implementation research and practice.
Provide Guidance for Specifying Mechanisms includes 15 actions reflecting mostly tools/aids to improve researchers’ approach to examining mechanisms (e.g., a list of questions and criteria for articulating mechanisms).
Increase Focus on Theorizing includes 12 ways to capitalize on developing, incorporating, and refining theory into mechanistic research to better characterize mechanisms (e.g., make theory explicit in the strategy design phase).
The emergent actions related to Engagement and Growing the Field provide further priorities for action.
Engaging the Policy and Practice Community includes 12 actions or methods for understanding the perspective of these potential partners (e.g., cognitive walkthroughs, plain language, Implementation Mapping [ 64 , 65 ]) and questions about when to include whom and how (e.g., compare “ground up” elucidation of mechanisms to the “top down” or theory-driven approach).
Engaging Funders and the Need for New Funding includes 17 actions to garner interest and expertise (e.g., mock study sections) and inspire novel use of new grant mechanisms (e.g., administrative supplements, trainee funding mechanisms).
Build Capacity includes 17 actions to offer clarification/guidance (e.g., how to understand conceptual/theoretical misalignment between strategies, mechanisms, and outcomes) and avenues to build the field’s capacity (e.g., postdoctoral training grants).
Emphasize Dissemination includes 17 actions like specific manuscript ideas, ways to engage journals to support mechanism-focused manuscripts, forums to host this dialogue, and other methods for generating broader interest beyond academia. Such methods are intended to foster iterative and collaborative advancements in mechanism research across interdisciplinary groups.
This paper articulates opportunities to advance the study of implementation mechanisms in a research agenda organized by priorities for the field and specific actions to advance those priorities. Actions range from those that can be acted upon now by way of shifting the research paradigm (e.g., always articulate mechanisms when designing implementation strategies) to those that may need targeted funding and specialized knowledge/expertise (e.g., conduct sufficiently powered, multilevel tests of mechanisms with multidisciplinary input). What follows is a discussion of each priority area by highlighting actions (represented by A# corresponding to Table 5 ) or exemplars organized by goals (represented by G# in Table 5 ). These actions were articulated by the MNoE (a group of experts) as ways to address challenges identified in their prior concept mapping work.
With 100+ discrete implementation strategies and behavior change techniques from which to choose [ 12 , 13 , 14 , 15 , 16 ], balanced against evidence that a single strategy will rarely suffice to realize sustained and robust change [ 68 , 69 ], accumulating basic knowledge about how strategies work is crucial. Although the MNoE acknowledged that a starting place could be to curate a list of implementation mechanisms, they also emphasized the risk of overreliance on static lists and frameworks at the expense of theorizing or broader critical thinking [ 70 , 71 ] (A22.4), particularly where evidence for strategy functioning and causal processes is thin. To this end, the MNoE prioritized knowledge synthesis across completed studies (G1) and coordination of future studies (G2). Specifically, the MNoE prioritized accumulating knowledge to yield practical information such as: (i) which strategies are needed for specific types of interventions across most contexts (e.g., 'practice & feedback' needed for evidence-based psychotherapy implementation) (A1.8); (ii) which strategies hold promise in addressing certain barriers across diverse operationalizations [ 72 , 73 ] (A1.12) (e.g., educational training to address knowledge deficits); and (iii) whether strategy-mechanism pairings are universal, or if and how pathways vary across contexts (e.g., service system, level of actor, community, culture) or strategy operationalization (i.e., form versus function [ 74 , 75 ]) (A1.7).
Not only are individual studies needed to test strategy pathways to yield this information (P1.5), which could be done in practical and efficient simulation studies (A2.5), but evidence syntheses are needed to curate this practical information (A1.1, 1.2, 1.3, 1.6, 1.8, 1.9, 1.10, 1.11, 1.12). These possible actions are ripe for those interested in secondary data analysis. Alternatively, meta-laboratories (meta-labs) [ 76 ] offer an approach to testing implementation strategies at scale with the possibility of pooling samples for mediation analyses (A2.3). Meta-labs can harness practical implementation efforts in health systems, for example, where different operationalizations of commonly deployed strategies can be examined using harmonized implementation process, service, and patient-level health outcomes contained in electronic medical records. Grimshaw and colleagues are pioneering the meta-lab by convening subject matter experts to accumulate evidence about audit and feedback [ 76 , 77 ]. It is unclear whether existing grant funding mechanisms can accommodate the infrastructure necessary for multi-study, global coordination, and data sharing in such efforts (A19.5).
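To make the idea of testing a strategy's pathway concrete, the sketch below shows a product-of-coefficients mediation analysis on simulated data, the kind of analysis that pooled meta-lab samples would power. Everything here is an illustrative assumption (the variables, effect sizes, and sample size are invented, not drawn from any study cited in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical pooled sample across sites

# Simulated data: X = exposure to an implementation strategy (0/1),
# M = candidate mechanism (e.g., clinician self-efficacy),
# Y = implementation outcome (e.g., fidelity).
X = rng.binomial(1, 0.5, n).astype(float)
a_true, b_true = 0.6, 0.8
M = a_true * X + rng.normal(0, 1, n)            # path a: strategy -> mechanism
Y = b_true * M + 0.2 * X + rng.normal(0, 1, n)  # path b plus a direct effect

def slopes(y, *cols):
    """Ordinary least squares with an intercept; returns slopes for cols."""
    Z = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]

a_hat = slopes(M, X)[0]     # effect of strategy on mechanism
b_hat = slopes(Y, X, M)[1]  # effect of mechanism on outcome, adjusting for X
indirect = a_hat * b_hat    # product-of-coefficients estimate of mediation
print(f"estimated indirect effect: {indirect:.2f}")
```

In practice one would add bootstrapped confidence intervals and multilevel models to respect clustering across sites; the sketch only illustrates why pooling matters, since mediated effects are often small and require substantial power to detect.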
To accumulate knowledge efficiently, the MNoE recommended a mechanism-focused study repository for sharing information, evidence, and methods (A2.2). A repository could be used to share measures of mechanisms for cross-study testing and comparison; to report the impact/effect of strategies alongside how and why data; and to provide diverse exemplar studies, especially those that engage community/practice partners. Web-based resources for implementation science are mounting (e.g., measure repositories [ 78 , 79 ]), but to our knowledge few living repositories or systematic reviews exist, perhaps because they are a relatively novel methodology [ 80 ] expedited into action by the COVID-19 pandemic [ 81 , 82 ].
Finally, the MNoE prioritized drawing on other disciplines (G3) and collaborating with experts from other disciplinary backgrounds (G4), such as scholars who study mechanisms from a multilevel perspective (A3.1). There are dozens of fields (e.g., governance, natural resources, education, health promotion) in which one entity helps another do something differently (A3.2) to integrate evidence-based interventions and strive for equity. The MNoE cautioned against our field 'recreating the methodological wheel' and underscored the utility of multidisciplinary workgroups (A4.1) and workshops (A4.2). The MNoE prioritized actions to make implementation science more accessible (e.g., 1-page documents such as an SBAR: Situation, Background, Assessment, Recommendation [ 83 ] that conveys the importance of studying implementation mechanisms) to support bidirectional learning and serve as a springboard for convenings. A recent commentary expressed concern that our field borrows superficially from others when interdisciplinarity or trans-disciplinarity is warranted [ 20 ]. Funders have recently made deep interdisciplinary collaboration a priority through opportunities such as the National Cancer Institute Implementation Science Centers [ 84 ], in which Research Program Cores bring together numerous disciplines in a Methods Unit to test, refine, and disseminate new approaches [ 85 ] throughout 5-year awards [ 86 ].
The MNoE asserted the importance of overcoming design challenges (e.g., multiple multi-level mechanisms) and innovating methods (e.g., to address the time-varying nature of mechanism activation) specific to the study of mechanisms. They prioritized activities to guide selection and refinement of study designs (G5), enable measurement of pertinent and feasible data (G6), and leverage strengths of different research methods (G7) to enable establishing strategy mechanisms. For instance, much like the overview of designs that emerged from an NIH working session in 2014 [ 52 ], guidance is needed regarding when to use different designs and methods specifically for the purpose of establishing implementation mechanisms (A5.1). The MNoE suggested mechanism activation may offer an earlier signal along the causal pathway to indicate whether a strategy is working as hypothesized (A6.3). Designing trials for early signal testing demands methodological guidance regarding what constitutes reasonable levels of evidence (go/no-go indicators) (A6.4), how to time mechanism measurement or measure intermediate outcomes (A6.5), and how to pivot if the signal is not detected, particularly in a grant-funded study where adapting/changing the implementation strategy (i.e., independent variable) could be deemed a protocol deviation [ 58 ]. Fortunately, methods experts are beginning to apply adaptive trial designs that directly answer this call [ 87 ]. The MNoE also acknowledged the power of qualitative methods [ 88 , 89 , 90 , 91 ] to inform theory development and surface candidate mechanisms (A7.1) and to offer formative evidence for why a strategy did not work as intended (A7.2). The MNoE highlighted that qualitative methods provide richness, unique insights, and critical perspectives of those with lived experience [ 57 , 89 , 90 , 91 , 92 ]. 
Engagement with diverse partners will yield more specific, contextualized, and experientially-informed hypotheses of how strategies are working (A7.3) that may be more acceptable and appropriate for a given context and innovation compared to researcher-derived hypotheses. For example, a secondary analysis of a large implementation trial of measurement-based care revealed no significant mediators from the quantitative data but identified important candidate mechanisms from qualitative analyses [ 93 ].
In general, great strides have been made to enhance the quality, accessibility, and utility of measurement in implementation science through systematic reviews, guidance documents, and web-based repositories [ 78 , 79 , 94 ]. The MNoE prioritized actions specific to studying mechanisms: developing grounded and generalizable measures (G8), recommending best practices for measurement (G9), and clarifying ongoing measurement challenges (G10). The MNoE articulated the need to deploy measurement methods that allow for multiple, real-time assessments to detect changes that unfold over time (A8.2), as mechanisms are hypothesized to be activated at varying rates by population and context. The MNoE elevated the possible use of passive data collection approaches for continuous monitoring of mechanisms (A9.2), ecological momentary assessment (EMA), and lower-burden, near-continuous assessments to track changes in mechanisms and determinants (A9.3). As an example, EMA was used to identify predictors of noncompliance with event-based reporting of tobacco use [ 95 ]. Although this example is implementation-adjacent, it illustrates how underused approaches like EMA can overcome measurement challenges critical to studying mechanisms, such as timing (e.g., multiple, repeated measures) and self-report (e.g., bias, memory).
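One practical ingredient of EMA is a random-interval prompt schedule that respondents cannot anticipate. The following is a minimal sketch under invented parameters (the waking window, prompt counts, and dates are all hypothetical, not drawn from the cited study):

```python
import random
from datetime import datetime, timedelta

random.seed(7)  # deterministic for illustration

def daily_prompts(day_start, k=4, window_hours=12, min_gap_minutes=60):
    """Draw k jittered prompt times within a waking window, enforcing a
    minimum gap so repeated assessments are spread across the day."""
    while True:
        minutes = sorted(random.sample(range(window_hours * 60), k))
        gaps = [b - a for a, b in zip(minutes, minutes[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [day_start + timedelta(minutes=m) for m in minutes]

# A week of schedules starting from an illustrative 8:00 waking window.
start = datetime(2024, 3, 4, 8, 0)
schedule = [daily_prompts(start + timedelta(days=d)) for d in range(7)]
print(len(schedule), len(schedule[0]))  # → 7 days, 4 prompts per day
```

Random-interval designs like this trade scheduling simplicity for reduced anticipation bias; signal-contingent designs (prompting when a passive sensor fires) are the natural extension for mechanism monitoring.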
The MNoE was initially organized to include a subset of scholars focused on understanding the linkages between strategies, mechanisms, determinants, and outcomes [ 30 ]. Recognizing that strategies are too often disconnected from determinants [ 96 , 97 ] and oversold with respect to outcomes [ 69 ], the MNoE articulated the role of mechanisms in the causal pathway in terms of how a strategy exerts its effects on target outcomes by overcoming barriers [ 98 ]. The MNoE prioritized defining mechanisms as distinct from determinants and establishing reporting standards for mechanisms research (G11) to support the deployment of cross-context and multilevel approaches (G12). The MNoE remarked that this is critical "foundational work" for scientific and practical progress. For instance, the MNoE encouraged consideration of which strategies (from compilations such as Expert Recommendations for Implementing Change (ERIC) [ 13 ] and Effective Practice and Organization of Care (EPOC) [ 99 ]) have evidence of activating specific mechanisms to resolve particular barriers and achieve specific outcomes. Such foundational knowledge of discrete strategies would be instrumental in designing a practical implementation plan, but to our knowledge no such synthesis or repository exists (A11.1), although a 2016 review does offer preliminary evidence on a subset of strategy-mediator pairings [ 21 ]. One activity to contribute this knowledge may be the "salvage strategy" [ 100 , 101 ], in which journals or conferences feature implementation failures and invite exploration of mechanism activation or the lack thereof [ 17 ] (A12.9). The MNoE also prioritized using theory to guide articulation of putative mechanisms (A12.3) and examining mechanisms across diverse contexts to explore how mechanisms might be activated differently, or over a different timeframe, across contexts, populations, or interventions (A12.4).
Moreover, the MNoE acknowledged the field's potential to hyperfocus on intrapersonal mechanisms of behavior change, which have a mounting evidence base [ 16 , 72 , 102 ]. To complement this individually focused work, the MNoE explicitly prioritized exploring mechanisms at less-studied aggregate levels of analysis (e.g., community or policy levels), where structures should be targeted (A12.6) to improve equitable outcomes [ 32 , 50 , 52 , 58 , 75 , 87 , 103 , 104 ].
Because implementation science is a convergence of many disciplines, there are relevant classic theories (e.g., from social psychology, business, economics, education, anthropology) that articulate mechanisms [ 105 ]. Most utilized, however, are frameworks, which lack the theoretical underpinnings that depict relationships among constructs and enable prediction through propositions; what remains is a list of measurable factors organized by conceptual coherence, as in the case of the Theoretical Domains Framework [ 106 ] and the updated Consolidated Framework for Implementation Research [ 107 ]. Kislov et al. [ 108 ] wrote about the importance of theorizing as a process that could enable implementation scientists to bidirectionally inform and learn from empirical data, testing and advancing generalizable knowledge while working at the mid-range level to develop and refine grand theories. More recently, Meza and colleagues [ 109 ] attempted to make theorizing more accessible to researchers, and although they use theorizing about determinants as their use case, they name mechanisms as a critical component of the causal chains that explain how an implementation initiative succeeds. Toward this goal, the MNoE prioritized activities that would incorporate theory (G13) through examples and guidance (G14). Actions included differentiating causal theory from program theory (A13.1), modifying implementation science "grand" theories to better represent mechanisms (A13.2), and making the notion of timing more explicit in theories of change (A13.6). Consistent with the above-mentioned calls to prioritize theory, the MNoE prioritized guidance on choosing relevant theories for study planning (A14.4), fully integrating theory into an implementation study of mechanisms (A14.3), and clarifying how theory is used to articulate mechanisms (A14.2).
Beyond the five priority clusters initially identified in the concept mapping of challenges stymying the field, two new priority clusters of actions emerged through MNoE discussions: Engagement and Growing the Field. These priorities reflect critical areas of work to advance the study of implementation mechanisms. The Engagement cluster represents actions that, if prioritized early, would amplify the impact of actions in the other clusters. Growing the Field actions are foundational and underpin the work of the other clusters, which might not otherwise be possible.
In terms of Engagement, the MNoE thought it critical to engage the policy and practice community, as well as funders of implementation science. The MNoE emphasized that the policy and practice communities are critical to establishing mechanisms, yet this area of science can feel obscure and pedantic to those communities. Funders were identified as a separate target for engagement because many of the prioritized actions do not fit neatly within traditional funding mechanisms.
The MNoE articulated priorities for engaging policy and practice partners in mechanism identification, validation, and testing (G15) and in using methods to obtain practice-based data and confirm theory (G16). The MNoE recommended plain-language mechanism definitions and de-jargonized questions for identifying mechanisms with community partners to help scientific teams learn from their perspectives (A15.1). Plain language was repeatedly emphasized because the term “mechanisms” itself may limit idea generation or perceptions of applicability as it tends to surface mechanical or biological underpinnings (A15.2, 15.3). The MNoE saw the policy and practice communities, broadly construed, as central to unearthing putative mechanisms and generated actions for facilitating their engagement, including motivating them to study mechanisms (A15.5), supporting them to collect and track data on mechanisms (A15.7), providing feedback (A15.8), and constructing causal pathways (A15.9). For instance, group model building presents a directed approach to engaging participants in articulating implementation mechanisms [ 110 ]. There are several more general frameworks, models, and approaches that can guide this kind of policy and practice community engagement, including community-based participatory research [ 111 ], community partnered participatory research [ 112 ], participatory action research [ 113 ], integrated knowledge translation [ 114 ], and user-centered design [ 67 , 115 ].
The MNoE articulated goals for engaging funders including emphasizing the study of mechanisms as a priority (G17), growing mechanism expertise (G18), and considering new funding models to support mechanism-focused research (G19). The MNoE suggested that it might be important to clarify, or confirm, that mechanism-focused research can lead to more parsimonious and efficient implementation approaches and reproducibility (A17.5). To this end, the MNoE surfaced the possibility of using scientific administrative supplements for mechanism data testing (A17.1) and making the study of mechanisms an explicit priority in funding opportunities (A17.2). To ensure mechanism evaluation fits within grant budget limits, the MNoE suggested deprioritizing patient and clinical outcomes when the intervention’s efficacy and/or effectiveness is robust and adaptation is minimal (A17.3).
The MNoE highlighted the importance of ensuring that grant reviewers are familiar with implementation mechanisms and can critically review grant proposals on these topics. To grow the capacity of reviewers (and the extramural community more broadly), the MNoE proposed specialized training for reviewers or the reviewer pipeline (A18.1), including conference workshops (A18.2) and mock study sections that center applications proposing implementation mechanisms research (A18.3). The MNoE envisioned a guideline document that would support assessing a study proposal’s plan to evaluate implementation mechanisms and scaffold learning key elements for mechanisms testing for those writing grant applications (A18.4).
Finally, the MNoE articulated several ideas for funding opportunities, or elements to emphasize within planned/existing funding opportunities. These included funding a coordinating center to harmonize measures, create the infrastructure for data collection, and integrate findings across numerous studies examining implementation strategy mechanisms (A19.5). The MNoE also raised the possibility of mechanism evaluation occurring during a follow-up (e.g., renewal) grant funding period, leveraging the longitudinal nature of the evaluation and the need to engage multiple partners (A19.4). In addition to large cross-study or longer-term initiatives, the MNoE suggested small and nimble grant opportunities that allow for discrete strategy testing and the ability to pivot if the strategy "signal" is not detected (A19.1).
Throughout the Deep Dive, the MNoE called for multi-pronged efforts to grow the field. The MNoE recommended resources for evaluating mechanisms that could scaffold scientists' efforts (G20), as well as more robust training to help scholars build new skillsets in the study of implementation mechanisms (G21). The MNoE prioritized guidance and resources on topics such as: how to test a strategy's causal pathway (A20.1), how to choose the most appropriate outcome for a given mechanism (A20.2), how to isolate a mechanism from other factors in a causal pathway (A20.3, 20.4), how to disentangle the intervention from implementation strategies (A20.5), when to adapt the intervention versus modify the implementation strategy (A20.6), and when to change the strategy for the context versus change the context using the strategy, based on our understanding of mechanisms (A20.8). With respect to this last topic, many scholars see contextual targets that, if changed, promise greater societal benefit (e.g., consideration of social determinants of health; addressing structural racism) as inappropriate targets for implementation scientists unless the evidence-based intervention itself is directed at those higher levels. Yet implementing within existing structures can exacerbate inequities. These are critical questions whose answers would have serious practical implications if empirical guidance could indeed be curated. Moreover, these questions are faced by numerous research teams, making the investment in generating such guidance all the more valuable. These are the types of empirical evidence and associated resources that might come from larger investments to support the study of mechanisms, such as center grant awards, from which both the scientific field and the practice community stand to benefit.
The MNoE also generated several actions characterized as training-like approaches to build capacity. These included brief, recorded didactic sessions on definitional issues surrounding mechanisms (A21.1), as well as more process-oriented training on, for example, how to specify causal chains (A21.2) and how to regularly reflect on why an implementation strategy is or is not working throughout the course of a study (A21.3). A team at the IMPACT Center has begun to produce videos aligned with these actions, with funding from the US National Institute of Mental Health (P50MH126219) [116]. Acknowledging that videos might not be sufficient, this team has also offered in-person workshop training followed by office hours and one year of expert consultation on causal pathway diagramming [117]. Multipronged training and consultation will be critical for capacity building in new areas like the study of implementation mechanisms. More innovative actions were also shared, including a workgroup to support a series of training grants focused on the study of mechanisms (A21.4) and a data summit in which underutilized data from grants could be made available for secondary analysis, paired with postdoctoral researchers in a shared mentoring model. The sentiment was that the expertise required to advance the study of mechanisms is sparse, and that approaches extending its reach to new teams and data sources would be critical.
Although several of the above suggestions function as dissemination, the MNoE articulated four specific dissemination-related goals: produce focused manuscripts (G22); partner with journals to generate new paper types (G23); establish forums for dialog (G24); and generate broad interest using strategies that reach community partners (G25). The MNoE articulated numerous manuscripts that would be helpful, such as Mechanisms Made Too Simple, inspired by Curran’s article [118]. They also imagined new paper types, such as one centered on “learning from failure with wisdom,” which would essentially unpack implementation failures with a mechanistic lens. An example of such a commentary, written by researchers who were not members of the original research team about a recently published null trial, suggests this approach is fruitful [17]; yet another approach is to ensure that implementers have opportunities to share “salvage strategies” that retain rigor when unexpected events threaten to derail studies that could shed light on mechanisms [100, 101]. Finally, the MNoE underscored the importance of clarifying the “why” behind the study of mechanisms, particularly given the importance of learning from and supporting the policy and practice community. As they discussed dissemination, they surfaced a marketing problem: not all would agree that the study of mechanisms can advance both science and practice, and some members believed this reductionist approach is misaligned with the very nature of implementation [20].
Importantly, the MNoE may not be representative of those who could contribute to and/or stand to benefit from this work. Although we made efforts to engage researchers from outside the United States (US; e.g., open attendance during a SIRC breakout; international representation in MNoE paper writing groups), the inputs and outputs of this research agenda largely reflect a US perspective. Indeed, parallel and complementary work from scholars in the United Kingdom (UK) includes an ontology of mechanisms of action in behavior change interventions that begins to address several aspects of the Research Agenda [119]. We hope readers with different perspectives will consider building from the US and UK work, for example, by writing a commentary to further the dialog and/or pursuing research that advances some of the priorities discussed above. Moreover, although some MNoE members identify as more clinically or practically oriented researchers, the MNoE did not include policy and practice community members. Thus, it is likely that new actions across the priority clusters would have emerged if different groups had been engaged in generating this content. Also, the focus of this research agenda is on implementation strategy mechanisms, or the processes through which strategies exert their effects to achieve outcomes [30]. This focus overlooks contextual mechanisms, such as those surfaced through realist reviews [120]. It is consistent with prior work by our team [19], but it can limit the field’s ability to explain how and why implementation occurs.
Implementation science needs to expand further from asking what works to asking how and why certain strategies work, for whom, when, and in which contexts [121]. This research agenda outlines a roadmap of concrete actions for advancing the study of mechanisms. Carrying it out will require concerted and strategic effort. There are numerous training forums that grow implementation research capacity [122]. We hope some of these forums will highlight the priorities articulated herein, bring together transdisciplinary experts with mechanism-specific expertise, and contribute to the study of implementation mechanisms.
MNoE members from other countries were invited but were unable to attend due to COVID-19 restrictions.
SIRC: Society for Implementation Research Collaboration
MNoE: Mechanisms Network of Expertise
Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: Addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8:1–8.
Gustafson P, Abdul Aziz Y, Lambert M, Bartholomew K, Rankin N, Fusheini A, et al. A scoping review of equity-focused implementation theories, models and frameworks in healthcare and their application in addressing ethnicity-related health inequities. Implement Sci. 2023;18:51.
Pullmann MD, Dorsey S, Duong MT, Lyon AR, Muse I, Corbin CM, et al. Expect the unexpected: A qualitative study of the ripple effects of children’s mental health services implementation efforts. Implement Res Pract. 2022;3:26334895221120796.
Dadich A, Vaughan P, Boydell K. The unintended negative consequences of knowledge translation in healthcare: A systematic scoping review. Health Sociol Rev. 2023;32:75–93.
Woodward EN, Singh RS, Ndebele-Ngwenya P, Melgar Castillo A, Dickson KS, Kirchner JE. A more practical guide to incorporating health equity domains in implementation determinant frameworks. Implement Sci Commun. 2021;2:61.
Gaias LM, Arnold KT, Liu FF, Pullmann MD, Duong MT, Lyon AR. Adapting strategies to promote implementation reach and equity (ASPIRE) in school mental health services. Psychol Sch. 2022;59:2471–85.
Building health equity into implementation strategies and mechanisms. Available from: https://reporter.nih.gov/search/0xJrdiFN8EeuIS8eEAKZCA/project-details/10597777 . Cited 2023 Aug 9
Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: A systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:1–11.
Squires JE, Graham ID, Santos WJ, Hutchinson AM, Backman C, Bergström A, et al. The Implementation in Context (ICON) framework: a meta-framework of context domains, attributes and features in healthcare. Health Res Pol Syst. 2023;21:81.
Birken SA, Powell BJ, Presseau J, Kirk MA, Lorencatto F, Gould NJ, et al. Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): A systematic review. Implement Sci. 2017;12:2.
Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69:123–57.
Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Ann Behav Med. 2013;46:81–95.
Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: Results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:1–14.
Kok G, Gottlieb NH, Peters GY, Mullen PD, Parcel GS, Ruiter RAC, et al. A taxonomy of behaviour change methods: An Intervention Mapping approach. Health Psychol Rev. 2016;10:297–312.
McHugh S, Presseau J, Luecking CT, Powell BJ. Examining the complementarity between the ERIC compilation of implementation strategies and the behaviour change technique taxonomy: a qualitative analysis. Implement Sci. 2022;17:56.
Marques M, Wright A, Johnston M, West R, Hastings J, Zhang L, et al. The Behaviour Change Technique Ontology: Transforming the Behaviour Change Technique Taxonomy v1. PsyArXiv; 2023. Available from: https://psyarxiv.com/vxypn/ . Cited 2023 Aug 8
Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19:e1003918.
Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:1–6.
Lewis CC, Boyd MR, Walsh-Bailey C, Lyon AR, Beidas R, Mittman B, et al. A systematic review of empirical studies examining mechanisms of implementation in health. Implement Sci. 2020;15:1–25.
Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17:55.
Williams NJ. Multilevel mechanisms of implementation strategies in mental health: Integrating theory, research, and practice. Adm Policy Ment Health. 2016;43:783–98.
Baker R, Comosso-Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;4:1–118.
Lewis CC, Marti CN, Scott K, Walker MR, Boyd M, Puspitasari A, et al. Standardized versus tailored implementation of measurement-based care for depression in community mental health clinics. Psychiatr Serv. 2022;73:1094–101.
Lewis CC, Stanick C, Lyon A, Darnell D, Locke J, Puspitasari A, et al. Proceedings of the Fourth Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2017: implementation mechanisms: what makes implementation work and why? part 1. Implement Sci. 2018;13(Suppl 2):1–5.
Guastaferro K, Collins LM. Optimization methods and implementation science: an opportunity for behavioral and biobehavioral interventions. Implement Res Pract. 2021;2:26334895211054364.
McHugh SM, Riordan F, Curran GM, Lewis CC, Wolfenden L, Presseau J, et al. Conceptual tensions and practical trade-offs in tailoring implementation interventions. Front Health Serv. 2022;2. Available from: https://www.frontiersin.org/articles/10.3389/frhs.2022.974095 . Cited 2022 Nov 19
Riordan F, Kerins C, Pallin N, Albers B, Clack L, Morrissey E, et al. Characterising processes and outcomes of tailoring implementation strategies in healthcare: a protocol for a scoping review. HRB Open Research; 2022. Available from: https://hrbopenresearch.org/articles/5-17 . Cited 2022 Nov 19
Landes SJ, Kerns SEU, Pilar MR, Walsh-Bailey C, Yu SH, Byeon YV, et al. Proceedings of the Fifth Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2019: where the rubber meets the road: the intersection of research, policy, and practice - part 1. Implement Sci. 2020;15(Suppl 3):1–5.
Vejnoska SF, Mettert K, Lewis CC. Mechanisms of implementation: An appraisal of causal pathways presented at the 5th biennial Society for Implementation Research Collaboration (SIRC) conference. Implement Res Pract. 2022;3:26334895221086270.
Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11:e053474.
Proctor EK, Luke D, Calhoun A, McMillen JC, McCrary S, Padek M. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10:1–13.
Powell BJ. Challenges related to the study of implementation mechanisms: A concept mapping approach. San Diego, CA: Conference presentation at the Society for Implementation Research Collaboration Biennial Conference; 2022.
Vennix JA. Group model building. System Dynamics. 1996;2:123–32.
Hovmand PS, Rouwette EAJA, Andersen DF, Richardson GP, Kraus A. Scriptapedia 4.0.6. Cambridge: The 31st International Conference of the System Dynamics Society; 2013. Available from: https://proceedings.systemdynamics.org/2013/proceed/papers/P1405.pdf . Cited 2024 Sept 6.
Cantrill JA, Sibbald B, Buetow S. Delphi and nominal group techniques in health services research. Int J Pharm Pract. 1996;4:67–74.
American Society for Quality. What is multivoting? 2023. Available from: https://asq.org/quality-resources/multivoting . Cited 2023 Aug 8
Ertmer PA, Glazewski KD. Developing a research agenda: contributing new knowledge via intent and focus. J Comput High Educ. 2014;26:54–68.
Kane M, Trochim WMK. Concept mapping for planning and evaluation. Thousand Oaks, CA: Sage; 2007.
Albers B, Metz A, Burke K, Bührmann L, Bartley L, Driessen P, et al. The mechanisms of implementation support - findings from a systematic integrative review. Res Soc Work Pract. 2022;32:259–80.
Miech EJ, Rattray NA, Flanagan ME, Damschroder L, Schmid AA, Damush TM. Inside help: An integrative review of champions in healthcare-related implementation. SAGE Open Med. 2018;6:2050312118773261.
Zamboni K, Baker U, Tyagi M, Schellenberg J, Hill Z, Hanson C. How and under what circumstances do quality improvement collaboratives lead to better outcomes? A systematic review. Implement Sci. 2020;15:27.
Kilbourne AM, Geng E, Eshun-Wilson I, Sweeney S, Shelley D, Cohen DJ, et al. How does facilitation in healthcare work? Using mechanism mapping to illuminate the black box of a meta-implementation strategy. Implement Sci Commun. 2023;4:53.
Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29:1534–41.
Akiba CF, Powell BJ, Pence BW, Nguyen MXB, Golin C, Go V. The case for prioritizing implementation strategy fidelity measurement: benefits and challenges. Transl Behav Med. 2022;12:335–42.
Akiba CF, Powell BJ, Pence BW, Muessig K, Golin CE, Go V. “We start where we are”: a qualitative study of barriers and pragmatic solutions to the assessment and reporting of implementation strategy fidelity. Implement Sci Commun. 2022;3:117.
Akiba CF, Go VF, Powell BJ, Muessig K, Golin C, Dussault JM, et al. Champion and audit and feedback strategy fidelity and their relationship to depression intervention fidelity: A mixed method study. SSM - Mental Health. 2023;3:100194.
Brookman-Frazee L, Stahmer AC. Effectiveness of a multi-level implementation strategy for ASD interventions: study protocol for two linked cluster randomized trials. Implement Sci. 2018;13:66.
Brookman-Frazee L, Chlebowski C, Suhrheinrich J, Finn N, Dickson KS, Aarons GA, et al. Characterizing shared and unique implementation influences in two community services systems for autism: applying the EPIS framework to two large-scale autism intervention community effectiveness trials. Adm Policy Ment Health. 2020;47:176–87.
Rothman AJ, Sheeran P. What is slowing us down? Six challenges to accelerating advances in health behavior change. Ann Behav Med. 2020;54:948–59.
Luke DA, Powell BJ, Paniagua-Avila A. Bridges and mechanisms: Integrating systems science thinking into implementation research. Ann Rev Public Health. 2024;45:7.
Swedberg R. Can you visualize theory? On the use of visual thinking in theory pictures, theorizing diagrams, and visual sketches. Sociol Theory. 2016;34:250–75.
Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.
Mercer SL, DeVinney BJ, Fine LJ, Green LW, Dougherty D. Study designs for effectiveness and translation research: Identifying trade-offs. Am J Prev Med. 2007;33:139–54.
Mazzucca S, Tabak RG, Pilar M, Ramsey AT, Baumann AA, Kryzer E, et al. Variation in research designs used to test the effectiveness of dissemination and implementation strategies: A review. Front Public Health. 2018;6:1–10.
Hwang S, Birken SA, Melvin CL, Rohweder CL, Smith JD. Designs and methods for implementation research: advancing the mission of the CTSA program. J Clin Transl Sci. 2020;4:159–67.
Institute of Medicine. Assessing the Use of Agent-Based Models for Tobacco Regulation. Washington, DC: The National Academies Press; 2015. Available from: https://doi.org/10.17226/19018
Bonell C, Warren E, Melendez-Torres G. Methodological reflections on using qualitative research to explore the causal mechanisms of complex health interventions. Evaluation. 2022;28:166–81.
Frank HE, Kemp J, Benito KG, Freeman JB. Precision implementation: an approach to mechanism testing in implementation research. Adm Policy Ment Health. 2022;49:1084–94.
Glaser J, Laudel G. The Discovery of Causal Mechanisms: Extractive Qualitative Content Analysis as a Tool for Process Tracing. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research. 2019;20. Available from: https://www.qualitative-research.net/index.php/fqs/article/view/3386 . Cited 2023 Aug 9
Stone AA, Schneider S, Smyth JM. Evaluation of pressing issues in ecological momentary assessment. Annu Rev Clin Psychol. 2023;19:107–31.
Kerwer M, Chasiotis A, Stricker J, Günther A, Rosman T. Straight From the scientist’s mouth—plain language summaries promote laypeople’s comprehension and knowledge acquisition when reading about individual research findings in psychology. Collabra: Psychology. 2021;7:18898.
Jones L, Wells K. Strategies for academic and clinician engagement in community-participatory partnered research. JAMA. 2007;297:407–10.
London RA, Glass RD, Chang E, Sabati S, Nojan S. “We Are About Life Changing Research”: Community Partner Perspectives on Community-Engaged Research Collaborations. Journal of Higher Education Outreach and Engagement. 2022;26. Available from: https://openjournals.libs.uga.edu/jheoe/article/view/2512 . Cited 2023 Aug 9
Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: Using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:1–15.
Fernandez ME, Powell BJ, Ten Hoor GA. Editorial: Implementation Mapping for selecting, adapting and developing implementation strategies. Front Public Health. 2023;11:1–4.
Lyon AR, Coifman J, Cook H, McRee E, Liu FF, Ludwig K, et al. The Cognitive Walkthrough for Implementation Strategies (CWIS): a pragmatic method for assessing implementation strategy usability. Implement Sci Commun. 2021;2:78.
Graham AK, Wildes JE, Reddy M, Munson SA, Taylor CB, Mohr DC. User-centered design for technology-enabled services for eating disorders. Int J Eat Disord. 2019;52:1095–107.
Oxman AD, Thomson MA, Davis DA, Haynes B. No magic bullets: A systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J. 1995;153:1424–31.
Shojania KG, Grimshaw JM. Still no magic bullets: pursuing more rigorous research in quality improvement. Am J Med. 2004;116:778–80.
Walsh Bailey C, Wiltsey Stirman S, Helfrich CD, Moullin J, Nilsen P, Oladunni O, et al. The hazards of overreliance on theories, models, and frameworks and how the study of mechanisms can offer a solution. San Diego, CA: Society for Implementation Research Collaboration Conference; 2022.
Connors EH, Martin JK, Aarons GA, Barwick M, Bunger AC, Bustos TE, et al. Proceedings of the Sixth Conference of the Society for Implementation Research Collaboration (SIRC) 2022: from implementation foundations to new frontiers. Implement Res Pract. 2023;4:S1–185.
Johnston M, Carey RN, Connell Bohlen LE, Johnston DW, Rothman AJ, de Bruin M, et al. Development of an online tool for linking behavior change techniques and mechanisms of action based on triangulation of findings from literature synthesis and expert consensus. Transl Behav Med. 2021;11:1049–65.
Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: Diversity in recommendations and future directions. Implement Sci. 2019;14:1–15.
Perez Jolles M, Lengnick-Hall R, Mittman BS. Core functions and forms of complex health interventions: A patient-centered medical home illustration. J Gen Intern Med. 2019;34:1032–8.
Lengnick-Hall R, Stadnick NA, Dickson KS, Moullin JC, Aarons GA. Forms and functions of bridging factors: specifying the dynamic links between outer and inner contexts during implementation and sustainment. Implement Sci. 2021;16:34.
Grimshaw JM, Ivers N, Linklater S, Foy R, Francis JJ, Gude WT, et al. Reinvigorating stagnant science: implementation laboratories and a meta-laboratory to efficiently advance the science of audit and feedback. BMJ Qual Saf. 2019;28:416–23.
The Audit & Feedback MetaLab. Available from: https://www.ohri.ca/auditfeedback/ . Cited 2023 Aug 9
Implementation Outcome Repository. Available from: https://implementationoutcomerepository.org . Cited 2023 Aug 9
Systematic Reviews of Methods to Measure Implementation Constructs. Available from: https://journals.sagepub.com/topic/collections-irp/irp-1-systematic_reviews_of_methods_to_measure_implementation_constructs/irp . Cited 2023 Aug 9
Living systematic reviews | Cochrane Community. Available from: https://community.cochrane.org/review-development/resources/living-systematic-reviews . Cited 2023 Aug 9
Maguire BJ, Guérin PJ. A living systematic review protocol for COVID-19 clinical trial registrations. Wellcome Open Res. 2020;5:60.
Negrini S, Ceravolo MG, Côté P, Arienti C. A systematic review that is “rapid” and “living”: A specific answer to the COVID-19 pandemic. J Clin Epidemiol. 2021;138:194–8.
Compton J, Copeland K, Flanders S, Cassity C, Spetman M, Xiao Y, et al. Implementing SBAR across a large multihospital health system. Jt Comm J Qual Patient Saf. 2012;38:261–8.
Oh A, Vinson CA, Chambers DA. Future directions for implementation science at the National Cancer Institute: Implementation Science Centers in Cancer Control. Transl Behav Med. 2021;11:669–75.
Lewis CC, Hannon PA, Klasnja P, Baldwin L-M, Hawkes R, Blackmer J, et al. Optimizing Implementation in Cancer Control (OPTICC): protocol for an implementation science center. Implement Sci Commun. 2021;2:44.
Oh AY, Emmons KM, Brownson RC, Glasgow RE, Foley KL, Lewis CC, et al. Speeding implementation in cancer: The National Cancer Institute’s Implementation Science Centers in Cancer Control. J Natl Cancer Inst. 2023;115:131–8.
Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. 2014;9:1–14.
Ramanadhan S, Revette AC, Lee RM, Aveling EL. Pragmatic approaches to analyzing qualitative data for implementation science: an introduction. Implement Sci Commun. 2021;2:1–10.
National Cancer Institute. Qualitative methods in implementation science. Bethesda, Maryland; 2019. Available from: https://cancercontrol.cancer.gov/sites/default/files/2020-09/nci-dccps-implementationscience-whitepaper.pdf
Gertner AK, Franklin J, Roth I, Cruden GH, Haley AD, Finley EP, et al. A scoping review of the use of ethnographic approaches in implementation research and recommendations for reporting. Implement Res Pract. 2021;2:1–13.
Hamilton AB, Finley EP. Qualitative methods in implementation research: An introduction. Psychiatry Res. 2019;280:112516.
Lemire S, Kwako A, Nielsen SB, Christie CA, Donaldson SI, Leeuw FL. What is this thing called a mechanism? Findings from a review of realist evaluations. N Dir Eval. 2020;167:73–86.
Lewis CC, Boyd MR, Marti CN, Albright K. Mediators of measurement-based care implementation in community mental health settings: results from a mixed-methods evaluation. Implement Sci. 2022;17:71.
Rabin BA, Lewis CC, Norton WE, Neta G, Chambers D, Tobin JN, et al. Measurement resources for dissemination and implementation research in health. Implement Sci. 2016;11:1–9.
Kendall AD, Robinson CSH, Diviak KR, Hedeker D, Mermelstein RJ. Introducing a real-time method for identifying the predictors of noncompliance with event-based reporting of tobacco use in ecological momentary assessment. Ann Behav Med. 2023;57:399–408.
Bosch M, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: A multiple case analysis. J Eval Clin Pract. 2007;13:161–8.
Wensing M, Grol R. Knowledge translation in health: How implementation science could contribute more. BMC Med. 2019;17:1–6.
Lewis CC, Klasnja P, Lyon AR, Powell BJ, Lengnick-Hall R, Buchanan G, et al. The mechanics of implementation strategies and measures: advancing the study of implementation mechanisms. Implement Sci Commun. 2022;3:114.
EPOC Taxonomy. Available from: https://epoc.cochrane.org/epoc-taxonomy . Cited 2023 Aug 9
Hoagwood KE, Chaffin M, Chamberlain P, Bickman L, Mittman B. Implementation salvage strategies: maximizing methodological flexibility in children’s mental health research. In: 4th Annual NIH Conference on the Science of Dissemination and Implementation: Policy and Practice; 2011.
Dunbar J, Hernan A, Janus E, Davis-Lameloise N, Asproloupos D, O’Reilly S, et al. Implementation salvage experiences from the Melbourne diabetes prevention study. BMC Public Health. 2012;12:1–9.
Carey RN, Connell LE, Johnston M, Rothman AJ, de Bruin M, Kelly MP, et al. Behavior change techniques and their mechanisms of action: a synthesis of links described in published intervention literature. Ann Behav Med. 2019;53:693–707.
Crable EL, Lengnick-Hall R, Stadnick NA, Moullin JC, Aarons GA. Where is “policy” in dissemination and implementation science? Recommendations to advance theories, models, and frameworks: EPIS as a case example. Implement Sci. 2022;17:80.
Purtle J, Moucheraud C, Yang LH, Shelley D. Four very basic ways to think about policy in implementation science. Implement Sci Commun. 2023;4:111.
Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:1–13.
Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:1–17.
Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated consolidated framework for implementation research based on user feedback. Implement Sci. 2022;17:75.
Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14:1–8.
Meza RD, Moreland JC, Pullmann MD, Klasnja P, Lewis CC, Weiner BJ. Theorizing is for everybody: Advancing the process of theorizing in implementation science. Frontiers in Health Services. 2023;3. Available from: https://www.frontiersin.org/articles/10.3389/frhs.2023.1134931 . Cited 2023 Mar 19
Northridge ME, Metcalf SS. Enhancing implementation science by applying best principles of systems science. Health Res Pol Syst. 2016;14:1–8.
Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: The intersection of science and practice to improve health equity. Am J Public Health. 2010;100:S40–6.
Wells K, Jones L. “Research” in community-partnered, participatory research. JAMA. 2009;302:320–1.
Baum F, MacDougall C, Smith D. Participatory action research. J Epidemiol Community Health. 2006;60:854.
Nguyen T, Graham ID, Mrklas KJ, Bowen S, Cargo M, Estabrooks CA, et al. How does integrated knowledge translation (IKT) compare to other collaborative research approaches to generating and translating knowledge? Learning from experts in the field. Health Res Pol Syst. 2020;18:35.
Lyon AR, Bruns EJ. User-centered redesign of evidence-based psychosocial interventions to enhance implementation: Hospitable soil or better seeds? JAMA Psychiat. 2019;76:3–4.
Kaiser Permanente Washington Health Research Institute. Causal Pathway Diagrams. 2022. Available from: https://vimeo.com/740549106
Klasnja P, Meza RD, Pullmann MP, Mettert KD, Hawkes R, Palazzo L, et al. Getting cozy with causality: A causal pathway diagramming method to enhance implementation precision. Implement Res Pract. 2024;5:1–14.
Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun. 2020;1:1–3.
Schenk PM, Wright AJ, West R, Hastings J, Lorencatto F, Moore C, et al. An ontology of mechanisms of action in behaviour change interventions. Wellcome Open Res. 2024;8:337.
Salter KL, Kothari A. Using realist evaluation to open the black box of knowledge translation: a state-of-the-art review. Implement Sci. 2014;9:115.
Hamilton AB, Mittman BS. Implementation science in health care. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: Translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 385–400.
Viglione C, Stadnick NA, Birenbaum B, Fang O, Cakici JA, Aarons GA, et al. A systematic review of dissemination and implementation science capacity building programs around the globe. Implement Sci Commun. 2023;4:34.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
In addition to federal funding, we are grateful to have received funding support from the Society for Implementation Research Collaboration, which helped make our Deep Dive meetings possible. Moreover, we would like to thank attendees at SIRC’s 2019 conference who provided input into the challenges associated with studying implementation mechanisms. Finally, we wish to thank each member of the MNoE, some of whom contributed to the development of the R13 grant proposal (Gregory Aarons, Rinad Beidas, Aaron Lyon, Brian Mittman, Byron Powell, Bryan Weiner, Nate Williams), and the rest of whom participated in one or more Deep Dive meetings, between-event virtual sessions, or paper writing groups; see Additional File 1. Gracelyn Cruden began this work at her prior institution, Oregon Social Learning Center, and completed it at her current institution.
This research was supported by funding from the Agency for Healthcare Research and Quality (R13HS025632), National Institute of Mental Health (P50MH126219, R01MH111950, R01MH111981, K01MH128769, R25MH080916, K01MH113806, K01MH128761), and National Cancer Institute (P50CA244432, R01CA262325, and P50CA244690).
Authors and affiliations
Kaiser Permanente Washington Health Research Institute, 1730 Minor Avenue, Suite 1600, Seattle, WA, 98101, USA
Cara C. Lewis
The Warren Alpert Medical School, Brown University, Box G-BH, Providence, RI, 02912, USA
Hannah E. Frank
Chestnut Health System, Lighthouse Institute – OR Group, 1255 Pearl St, Ste 101, Eugene, OR 97401, USA
Gracelyn Cruden
Center for Healthcare Organization and Implementation Research, VA Boston Healthcare System, 150 South Huntington Avenue, Boston, MA, 02130, USA
Department of Psychiatry, Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA
UC Davis MIND Institute, 2825 50th St, Sacramento, CA, 95819, USA
Aubyn C. Stahmer
Department of Psychiatry and Behavioral Sciences, University of Washington, 1959 NE Pacific Street Box 356560, Seattle, WA, 98195-6560, USA
Aaron R. Lyon
Institute for Implementation Science in Health Care, University of Zurich, Zürich, Switzerland
Bianca Albers
Department of Psychiatry, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, 92093, USA
Gregory A. Aarons
Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, 625 N Michigan Avenue, Chicago, IL, 60611, USA
Rinad S. Beidas
Division of Health Services Research & Implementation Science, Department of Research & Evaluation, Kaiser Permanente Southern California, 100 S Los Robles Ave, Pasadena, CA, 91101, USA
Brian S. Mittman
Department of Global Health, School of Public Health, University of Washington, Box 357965, Seattle, WA, 98195, USA
Bryan J. Weiner
School of Social Work, Boise State University, Boise, ID, 83725, USA
Nate J. Williams
Center for Mental Health Services Research, Brown School, Washington University in St. Louis, St. Louis, MO, USA
Byron J. Powell
Center for Dissemination & Implementation, Institute for Public Health, Washington University in St. Louis, St. Louis, MO, USA
Division of Infectious Diseases, John T. Milliken Department of Medicine, School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
CCL, HEF, BK, AS, GC, and BJP contributed to the conceptualization of the manuscript, engaged in the coding, and participated in data interpretation. CCL drafted the introduction, results, and discussion. HEF and GC drafted the method section. ARL and BA reviewed preliminary results and contributed to revisions to the results table. BJP and CCL worked the manuscript through several cycles of review by all coauthors. All authors (CCL, HEF, BK, AS, GC, BJP, GAA, RSB, BSM, BJW, NJW, MF, SM, MP, LS, AW, CWB, SWS) reviewed, edited, and approved the final content of the manuscript.
Correspondence to Cara C. Lewis.
Ethics approval and consent to participate.
This study was reviewed and approved by Kaiser Permanente Washington Health Research Institute’s IRB and was deemed Not Human Subjects Research.
Competing interests.
Drs. Lewis and Weiner receive royalties from Springer Publishing. Dr. Beidas is principal at Implementation Science & Practice, LLC. She receives royalties from Oxford University Press, consulting fees from United Behavioral Health and OptumLabs, and serves on the advisory boards for Optum Behavioral Health, AIM Youth Mental Health Foundation, and the Klingenstein Third Generation Foundation outside of the submitted work. Dr. Aarons is a Co-Editor-in-Chief, Dr. Beidas is an Associate Editor, and Drs. Powell and Weiner are on the Editorial Board of Implementation Science, none of whom will play a role in the editorial process of this manuscript.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary material 1.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Lewis, C.C., Frank, H.E., Cruden, G. et al. A research agenda to advance the study of implementation mechanisms. Implement Sci Commun 5, 98 (2024). https://doi.org/10.1186/s43058-024-00633-5
Received: 17 January 2024
Accepted: 30 August 2024
Published: 16 September 2024
DOI: https://doi.org/10.1186/s43058-024-00633-5
ISSN: 2662-2211
Demystifying higher education.
A research agenda plays a valuable role in helping graduate students and faculty design their scholarly activities. Simply put, a research agenda identifies the areas you will research and the methodologies you will use to answer your questions. You have probably heard from professors in graduate school and beyond that you can't research everything, so you need to pick what you can feasibly study. Moreover, a scattershot approach can keep you from focusing on important questions and pull you in a number of different directions. In today's post, I will describe research agendas and why they can benefit researchers.
Just as a meeting agenda provides the items to be discussed during a meeting, a research agenda provides clarity and a framework for making decisions regarding your research activities.
It can be tempting to jump on any research idea that comes along and seems interesting.
Rather, what you need is a lens through which you can consider new ideas and projects as they come along.
A clearly articulated research agenda provides boundaries for you to make decisions regarding your scholarly work.
New projects will undoubtedly be attractive at first, but they should be considered in light of your agenda as the first step in reviewing them.
Only once the new idea is in line with your agenda do you move on to consider whether you have the time and desire to move forward.
In addition to serving as a useful guide, a research agenda helps others understand and contextualize the research work that you do.
A research agenda comprises a strand (or possibly two or three related strands) of research that you pursue. These may be topics or questions that your research seeks to explore.
Many people I have known do not have a single line of inquiry that forms their research agenda. Everyone has different, and often related, interests for their scholarship, so you may not have a single, isolated line of research that you explore.
Even tenure committees (which value firm agendas) realize that tenure candidates may have two related concepts that they studied extensively in graduate school, gained experience with in a laboratory, or explored as part of their dissertation.
As long as you can articulate each line of inquiry, the relationships between each line, and demonstrate your expertise in the two (or at most three) lines of inquiry that you are studying, most everyone in higher education will find this appropriate.
However, if your research appears to be a collection of random projects lacking a common thread between them, hiring and tenure committees will rightly question whether you have demonstrated expertise and developed a level of sophistication in your research.
For pre-tenure faculty, research agendas can be useful for helping you build up a reputation of expertise and work around a specific topic.
Colleges and universities want to see that pre-tenure faculty are establishing or have achieved a national reputation in their field of expertise.
A tightly focused research agenda helps you achieve prominence by concentrating your work in a specific area.
If someone’s research bounces around among a variety of relatively disconnected projects, then it becomes difficult to establish and validate areas of expertise particularly to external reviewers.
Moreover, working on similar research studies creates significant efficiencies. For example, you do not need to learn a new body of research in order to write the literature review and you are already familiar with journals that publish on your topic.
Overall, if you maintain a sense of consistency with your topic, you can more easily and quickly publish your research.
If you are struggling with articulating your own research agendas, I recommend studying the careers of major researchers in your discipline.
To do this, get a copy of the vita of a significant and well-respected researcher.
Next, look at the years prior to when the established scholar received tenure.
You are looking for how their line of research progressed throughout their career. Research takes a while to build up knowledge and data to answer specific questions.
Over time, as methodologies advance and the knowledge base grows, you will probably see research questions morph and change.
When looking at a full professor with 25 years of research experience, many pre-tenure faculty fail to fully appreciate how research agendas evolve. These professors did not magically come out of graduate school with the focus and expertise they possess today.
Studying these other agendas can help you learn how research agendas evolve over time, which can help in creating your own research agendas.
Establishing a research agenda and sharing it with professors, mentors, and colleagues provides an important foundation for your research activities, and I highly suggest taking the time to think about and articulate your own agenda.
A research agenda to improve outcomes in patients with chronic obstructive pulmonary disease and cardiovascular disease: an official american thoracic society research statement.
Background: Individuals with chronic obstructive pulmonary disease (COPD) are often at risk for or have comorbid cardiovascular disease and are likely to die of cardiovascular-related causes.
Objectives: To prioritize a list of research topics related to the diagnosis and management of patients with COPD and comorbid cardiovascular diseases (heart failure, atherosclerotic vascular disease, and atrial fibrillation) by summarizing existing evidence and using consensus-based methods.
Methods: A literature search was performed. References were reviewed by committee co-chairs. An international, multidisciplinary committee, including a patient advocate, met virtually to review evidence and identify research topics. A modified Delphi approach was used to prioritize topics in real time on the basis of their potential for advancing the field.
Results: Gaps spanned the translational science spectrum from basic science to implementation: 1) disease mechanisms; 2) epidemiology; 3) subphenotyping; 4) diagnosis and management; 5) clinical trials; 6) care delivery; 7) medication access, adherence, and side effects; 8) risk factor mitigation; 9) cardiac and pulmonary rehabilitation; and 10) health equity. Seventeen experts participated, and quorum was achieved for all votes (>80%). Of 17 topics, ≥70% agreement was achieved for 12 topics after two rounds of voting. The range of summative Likert scores was −15 to 25. The highest priority was "Conduct pragmatic clinical trials with patient-centered outcomes that collect both pulmonary and cardiac data elements." Health equity was identified as an important topic that should be embedded within all research.
Conclusions: We propose a prioritized research agenda with the purpose of stimulating high-impact research that will hopefully improve outcomes among people with COPD and cardiovascular disease.
• The research gaps spanned the following 10 domains across the translational science spectrum from basic science to implementation research: 1) mechanisms of disease; 2) epidemiology; 3) subphenotyping; 4) diagnosis and management; 5) clinical trials; 6) care delivery; 7) medication access, adherence, and side effects; 8) risk factor mitigation; 9) cardiac and pulmonary rehabilitation; and 10) health equity.
• A modified Delphi approach was used to prioritize the research topics in real time on the basis of their potential to advance the field and ultimately improve the lives of patients with COPD and comorbid cardiovascular disease. The topic voted top priority was "Conduct pragmatic clinical trials with patient-centered outcomes that collect both pulmonary and cardiac data elements," such as real-world effectiveness trials of the polypuff or polypill in the COPD patient population with concurrent cardiovascular disease. In addition, health equity was emphasized by the panel as cross-cutting all the domains and an important gap that should be embedded within all proposed research.
This research statement sets forth a prioritized research agenda that is based on expert opinion with the purpose of stimulating high-impact research for the optimal management of COPD and cardiovascular disease.
The burden of COPD is significant, affecting 479 million individuals worldwide in 2020 (1). The global burden of COPD is projected to increase by 23% between 2020 and 2050 (1). Cardiovascular disease is a leading cause of death worldwide (2). Comorbid cardiovascular conditions are common in patients with COPD (3–9). Compared with the general public, patients with COPD are ∼2.5 times more likely to have cardiovascular disease (7). Across cohorts, the prevalence of heart failure (7–42%), ischemic cardiovascular disease (2–18%), and arrhythmia (3–21%) is consistently high in people with COPD (3–9). Approximately 35% of deaths among patients with COPD are attributed to cardiovascular events (10–12), irrespective of airflow obstruction (13), which means that optimally treating cardiovascular disease in patients with COPD could have significant benefit at the population level.
1. Both share smoking as a significant risk factor ( ). Smoking activates the same underlying inflammatory pathways (TNF-α, IL-6, CRP, etc.) and aging pathways (telomere shortening, cellular senescence) in COPD and cardiovascular disease ( , ). Smoking leads to acute inflammation, oxidative stress, protein imbalance, elastin degradation, hypoxia or hypercapnia, endothelial dysfunction, thrombogenicity, atherosclerosis, and arterial stiffness ( , , ), resulting in direct damage to both lung and heart tissue.
2. The two organs are interconnected anatomically and physiologically: the pulmonary vasculature is situated between the ventricles, so their structure and function are closely tied ( ). It is well established that hypoxemia induces vasoconstriction in the pulmonary arteries, leading to chronic pulmonary hypertension (group 3) and right heart failure. As the right heart chamber and muscle enlarge, ventricular interdependence obstructs left ventricular filling and reduces cardiac output. This causes pulmonary vascular congestion, which worsens respiratory failure. The interconnection between the lung and heart is cyclical and reciprocal. A more recent example is preliminary evidence showing that certain cardiovascular parameters (oxygen pulse and pulse pressure) improve after lung volume reduction surgery in emphysema ( ).
3. Exacerbations in one organ system can perturb the other. The risk of cardiovascular events increases dramatically in the 30 days after a COPD exacerbation and persists up to one year ( – ), suggesting that systemic inflammation caused by one disease process (COPD) directly affects other organs, such as the heart.
4. Inactivity due to fatigue or hypoxemia from COPD can increase the risk of coronary artery disease.
5. Medications to treat one condition can negatively affect the other organ system ( , ). For example, on the pulmonary side, treating COPD exacerbations with azithromycin is associated with a small increase in the risk of cardiovascular death compared with no antibiotics or amoxicillin, an effect most pronounced in patients at highest risk of cardiovascular events ( ).
6. The presence of a comorbidity might alter clinicians' assessment of the risk-benefit ratio of providing evidence-based care for the first condition. Patients with COPD and heart failure might not be given β-blockers, despite an evidence-based indication, out of concern for bronchoconstriction, even though the absolute risk with cardioselective (β1) β-blockers is inconsequential ( – ). At the population level, this pattern of withholding guideline-concordant care for heart failure likely contributes to excess deaths and represents a translation gap.
7. It can be diagnostically challenging to differentiate COPD from cardiovascular disease, because they present with similar symptoms, such as shortness of breath and fatigue, in both the acute and chronic states ( , , ). Without a thorough review of systems, physical examination, and laboratory workup (e.g., brain natriuretic peptide) to narrow the differential diagnosis, patients with COPD and worsening shortness of breath might be treated with escalating bronchodilator therapy when they are in fact developing heart failure and/or arrhythmias. Similarly, patients with COPD who present to acute care with shortness of breath and wheezing may be treated with nebulizers and antibiotics because the suspicion for COPD exacerbation is so high, when pulmonary vascular congestion is in fact the driving etiology.
These complexities have limited our ability to alter the disease course of COPD. Risk factors for cardiovascular disease are well known and modifiable (cholesterol, blood pressure control, etc.), and mortality from ischemic heart disease has trended downward over time; mortality from COPD has not (39). Trials of new inhaler combinations for COPD have not decreased mortality, which could be because patients are not being treated optimally for all of their comorbid conditions. Understanding the complex interplay between the heart and lungs could open new avenues for population health management and new therapeutics (40). After performing a literature review, we used consensus-based methods to prioritize research topics related to the diagnosis and management of common comorbid cardiovascular conditions in patients with COPD. For the purposes of this workshop, we defined "cardiovascular disease" as heart failure, atherosclerotic cardiovascular disease, and atrial fibrillation, using the clinical framework of pump, ischemia, and rhythm. This research statement is a call to action with the goal of expediting research that will have the greatest impact on patients with COPD and cardiovascular disease.
This project was approved by the American Thoracic Society (ATS) Program Review Subcommittee. The co-chairs (L.C.M., M.D., V.G.P.) convened a multidisciplinary, international committee of members representing internal medicine, pulmonology, cardiology, geriatrics, nursing, pharmacy, drug development, quality improvement and policy, learning health systems, patient experience, and patient advocacy. Members represented urban and rural healthcare settings, academic and nonacademic institutions, and professional societies. Before confirming the final roster, potential conflicts of interest were disclosed and managed per the policies and procedures of the ATS.
The lead co-chair (L.C.M.) performed a literature search with help from a Kaiser Permanente librarian. The search methodology is described in Figure E1 in the data supplement. Using Medical Subject Headings “Cardiovascular Diseases/therapy”(MAJR) AND “Pulmonary Disease, Chronic Obstructive/therapy”(MAJR) together with filters for adults ≥18 years of age and published in the past 5 years, separate queries were done for articles related to 1 ) diagnosis, 2 ) management, and 3 ) rehabilitation. Articles were reviewed to inform the agenda for Day 1.
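The search strategy above can be sketched programmatically. The snippet below is illustrative only, not the committee's actual tooling: it composes the two MeSH Major Topic headings with each of the three query subtopics into an NCBI E-utilities `esearch` URL. The date-range and age-filter spellings are assumptions, since the text does not give the exact filter syntax.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint for PubMed (no request is made here).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# The two Major Topic MeSH headings named in the text.
MESH_CORE = ('"Cardiovascular Diseases/therapy"[Majr] AND '
             '"Pulmonary Disease, Chronic Obstructive/therapy"[Majr]')

def build_term(subtopic: str) -> str:
    """Combine the core MeSH headings with one of the three query subtopics.
    The adult-age and 5-year-window filter spellings are assumptions."""
    filters = '"Adult"[Mesh] AND ("2018"[PDat] : "2023"[PDat])'
    return f"({MESH_CORE}) AND ({subtopic}) AND {filters}"

def esearch_url(term: str, retmax: int = 100) -> str:
    """URL for a PubMed esearch request with the given query term."""
    return f"{ESEARCH}?{urlencode({'db': 'pubmed', 'term': term, 'retmax': retmax})}"

# One query per article category, as described in the text.
for subtopic in ("diagnosis", "management", "rehabilitation"):
    print(esearch_url(build_term(subtopic)))
```

Keeping the query construction separate from the request URL makes it easy to paste the term directly into the PubMed web interface to check hit counts before scripting retrieval.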
Two separate meetings were held virtually (September 5, 2023, and September 27, 2023). The first day was a 6-hour session comprising six presentations and four discussions led by experts in the field. Presentations focused on the existing literature and on identifying gaps related to three cardiovascular conditions common in COPD (heart failure; atherosclerotic cardiovascular disease, which includes vessels in the heart, brain, and periphery; and atrial fibrillation). These three conditions were chosen using the clinical framework of "cardiovascular pump, ischemia, rhythm." Content and discussion spanned the outpatient-to-inpatient care spectrum and addressed the complex relationships that exist between COPD and cardiovascular disease. Discussions of drugs were conducted at the class level. A patient representative with COPD and cardiovascular disease (C.G.) participated and provided the patient perspective. She stressed the importance of developing new technologies (drugs and devices) to facilitate the care and day-to-day management of COPD, streamline the care of patients with COPD who have multimorbidity to prevent siloed management among specialists, limit polypharmacy, decrease cost for patients, and improve adherence to guideline-based therapies. Twenty-two experts attended the first session. The audio file was transcribed using the TranscribeMe service, and the transcript was used to generate a word cloud examining the frequency of words and their proximity to one another. The co-chairs generated a list of 10 domains of gaps from Day 1's discussion that mapped onto the framework for the translational science spectrum, courtesy of the National Heart, Lung, and Blood Institute (41).
The second day was a 2-hour session to review the proposed list of research topics generated by the co-chairs and to perform a modified Delphi process in real time to prioritize the list (42, 43). Eighteen experts attended the second session. Seventeen of the 18 experts voted in all polls; the remaining person (co-chair V.G.P.) facilitated the Delphi rounds and was reserved to break a tie (if needed). Participants were given an opportunity to read the full list of research topics before voting, to prevent bias based on the order in which topics were listed. For each topic, participants were asked whether the topic should be considered a top priority, defined as holding significant potential for advancing the field and ultimately improving the lives of patients with COPD and cardiovascular disease. Participants voted via Zoom poll using a five-point Likert scale (strongly disagree, disagree, neutral, agree, and strongly agree). Participants were limited to a maximum of five votes of agree or strongly agree, to facilitate generating a prioritized list with a gradient. The anonymized results were displayed in real time after each round. If ≥70% agreement was not achieved after the first round (agreeing that the topic is either a priority or not a priority), a short (<10 min) discussion ensued to highlight key strengths and limitations of the proposed topic. A second and final poll was then launched. Additional details about the polling are provided in the data supplement.
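The round-1 decision rule described above can be expressed as a minimal sketch. One assumption is made explicit here: "agreement" is read as ≥70% of voters on the same side, pooling agree/strongly agree as "priority" and disagree/strongly disagree as "not a priority"; the vote labels and sample tallies are hypothetical.

```python
from collections import Counter

# Sides of the five-point scale, pooled for the consensus check.
PRIORITY = {"agree", "strongly agree"}
NOT_PRIORITY = {"disagree", "strongly disagree"}

def needs_discussion(votes, threshold=0.70):
    """Return True if neither side reached the agreement threshold in
    round 1, so a short discussion and a second (final) poll are needed."""
    counts = Counter(votes)
    n = len(votes)
    share_priority = sum(counts[v] for v in PRIORITY) / n
    share_not = sum(counts[v] for v in NOT_PRIORITY) / n
    return max(share_priority, share_not) < threshold

# 17 voters, 13 on the priority side (~76%): consensus reached in round 1.
round1 = ["strongly agree"] * 3 + ["agree"] * 10 + ["neutral"] * 4
print(needs_discussion(round1))  # False
```

Note that neutral votes count against both sides under this reading, which matches the intuition that an evenly hedged panel has not reached consensus.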
A co-chair (M.D.) performed the data analysis. The summative Likert score from the final round was calculated, with strongly disagree scored as −2, disagree as −1, neutral as 0, agree as +1, and strongly agree as +2. The final list of prioritized topics was then ordered by the summative Likert score from the final round. Results were sent to participants electronically for feedback. The lead co-chair (L.C.M.) drafted the initial version of the manuscript, which was then circulated to the full committee and iteratively revised. The ATS Board of Directors approved the final document. This document does not include clinical treatment recommendations.
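The scoring step above reduces to a small computation: map each final-round vote to its point value, sum per topic, and sort. The sketch below follows that description; the topic names and vote tallies are hypothetical, not the committee's data.

```python
# Point values for the five-point Likert scale, as described in the text.
SCORES = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly agree": 2}

def summative_score(votes):
    """Sum the point values of one topic's final-round votes."""
    return sum(SCORES[v] for v in votes)

def prioritize(topic_votes):
    """Order topics by summative Likert score, highest first."""
    return sorted(topic_votes,
                  key=lambda t: summative_score(topic_votes[t]),
                  reverse=True)

# Hypothetical final-round votes for two topics (17 voters each).
example = {
    "pragmatic trials with pulmonary + cardiac outcomes":
        ["strongly agree"] * 9 + ["agree"] * 7 + ["neutral"],
    "hypothetical lower-priority topic":
        ["agree"] * 4 + ["neutral"] * 6 + ["disagree"] * 7,
}
# Summative scores: 9*2 + 7*1 + 0 = 25, and 4*1 + 0 - 7 = -3.
print(prioritize(example))
```

With 17 voters, the achievable scores run from −34 to +34, so the reported observed range of −15 to 25 sits comfortably inside the scale.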
Committee members were diverse in terms of sex, race and ethnicity, geographic location, medical specialty, clinical discipline, research experience and expertise, and perspective (Table 1). The literature search for COPD and cardiovascular disease yielded 97 articles (34 for diagnosis, 43 for management, and 20 for rehabilitation). The key citations that informed Day 1's content are highlighted in Table 2.
Committee Member | Institution | Role | Area of Expertise |
---|---|---|---|
Laura Myers, M.D. | Kaiser Permanente Northern California | Co-chair, host | Overlap of COPD and cardiovascular disease, COPD readmissions, patient quality, safety, and policy |
Valerie G. Press, M.D., M.P.H. | University of Chicago | Co-chair, Delphi facilitator | COPD medications, treatment adherence, health literacy, COPD readmissions |
Miguel Divo, M.D., M.P.H. | Harvard University | Co-chair, speaker, data analyst | COPD and multimorbidity |
Jennifer Quint, M.D., Ph.D. | Imperial College London, United Kingdom | Member, speaker | Epidemiology of COPD and cardiovascular disease |
Peter Lindenauer, M.D. | Baystate Health | Member, speaker | COPD and pulmonary rehabilitation |
Nirupama Putcha, M.D., M.H.S. | Johns Hopkins Medicine | Member, speaker | Multimorbidity, health disparities |
Alan Hamilton, Ph.D. | COPD Foundation | Member, speaker | Pharmacological and nonpharmacological interventions for COPD, behavior change for patients with multimorbidity, regulatory drug approval |
Nathaniel M. Hawkins, M.D., M.P.H. | University of British Columbia, Canada | Member, speaker | Heart failure, cardiac arrhythmias, comorbidities |
Caroline Gainer | Patient | Member, speaker | Patient experience and advocacy |
J. Michael Wells, M.D., M.S.P.H. | University of Alabama at Birmingham | Member, discussion moderator | COPD and pulmonary vascular disease; mechanisms of inflammation and vascular remodeling |
David Mannino, M.D. | COPD Foundation | Member, discussion moderator | Epidemiology of COPD, environmental exposures, inflammation |
R. Graham Barr, M.D., Dr.P.H. | Columbia University | Member, discussion moderator | Prospective cohort studies, epidemiology, cardiopulmonary interactions |
Mark Dransfield, M.D. | University of Alabama | Member, discussion moderator | COPD, clinical trials, mechanisms of disease, health system leadership, leadership in multicenter randomized clinical trials |
Sadiya S. Khan, M.D., M.Sc. | Northwestern University Feinberg School of Medicine | Member | Preventive cardiology, screening for comorbidities, epidemiology, prospective cohort studies, member of the American Heart Association |
Sagar Shah, M.D. | Kaiser Permanente Northern California | Member | General internal medicine, referrals to specialists, polypharmacy |
Allan Walkey, M.D. | University of Massachusetts | Member | COPD, practice patterns, health services research, learning health systems |
Surya P. Bhatt, M.D. | University of Alabama at Birmingham | Member | COPD, clinical trials of medications |
Andrea S. Gershon, M.D. | Sunnybrook Research Institute, Canada | Member | COPD outcomes, health services research |
Todd Lee, Pharm.D., Ph.D. | University of Illinois | Member | Pharmacology, COPD medications, polypharmacy |
Huong Q. Nguyen, R.N., Ph.D. | Kaiser Permanente Southern California | Member | COPD, frailty, patient-centered outcomes |
Leah Witt, M.D. | University of San Francisco | Member | Geriatrics, multimorbidity |
Richard Mularski, M.D. | Kaiser Permanente Northwest | Member | COPD and patient-centered outcomes, leadership of clinical trial networks |
Definition of abbreviation: COPD = chronic obstructive pulmonary disease.
Topic | Summary of Key Points
---|---
Burden, impact and complexities of cardiovascular comorbidities in COPD |
IMAGES
VIDEO
COMMENTS
Producing a research agenda can facilitate the identification of challenges, from defining and building appropriate data sets, to performing practice-based research with high internal validity, and to producing results that are generalizable. Using the research agenda as a template for building the evidence base on AHDs also presents ...