⊕⊕⊕⊕ High
⊕⊕⊕◯ Moderate
⊕⊕◯◯ Low
⊕◯◯◯ Very low
We now describe in more detail the five reasons (or domains) for downgrading the certainty of a body of evidence for a specific outcome. In each case, if no reason is found for downgrading the evidence, it should be classified as 'no limitation or not serious' (not important enough to warrant downgrading). If a reason is found for downgrading the evidence, it should be classified as 'serious' (downgrading the certainty rating by one level) or 'very serious' (downgrading the certainty rating by two levels). For non-randomized studies assessed with ROBINS-I, rating down by three levels should be classified as 'extremely serious'.
(1) Risk of bias or limitations in the detailed design and implementation
Our confidence in an estimate of effect decreases if studies suffer from major limitations that are likely to result in a biased assessment of the intervention effect. For randomized trials, these methodological limitations include failure to generate a random sequence, lack of allocation sequence concealment, lack of blinding (particularly with subjective outcomes that are highly susceptible to biased assessment), a large loss to follow-up or selective reporting of outcomes. Chapter 8 provides a discussion of study-level assessments of risk of bias in the context of a Cochrane Review, and proposes an approach to assessing the risk of bias for an outcome across studies as ‘Low’ risk of bias, ‘Some concerns’ and ‘High’ risk of bias for randomized trials. Levels of ‘Low’, ‘Moderate’, ‘Serious’ and ‘Critical’ risk of bias arise for non-randomized studies assessed with ROBINS-I (Chapter 25). These assessments should feed directly into this GRADE domain. In particular, ‘Low’ risk of bias would indicate ‘no limitation’; ‘Some concerns’ would indicate either ‘no limitation’ or ‘serious limitation’; and ‘High’ risk of bias would indicate either ‘serious limitation’ or ‘very serious limitation’. ‘Critical’ risk of bias on ROBINS-I would indicate extremely serious limitations in GRADE. Review authors should use their judgement to decide between alternative categories, depending on the likely magnitude of the potential biases.
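The mapping just described lends itself to a simple lookup. The sketch below (Python; all names are ours and not part of GRADEpro, RoB 2 or ROBINS-I software) returns the GRADE study-limitation judgements compatible with an overall risk-of-bias rating, leaving the choice within that range to the review authors:

```python
# Illustrative sketch only: risk-of-bias ratings -> compatible GRADE
# study-limitation judgements, following the guidance in the text above.
# All names are hypothetical; this is not an API of any GRADE tool.
ROB_TO_GRADE_OPTIONS = {
    # Randomized trials (overall ratings as in Chapter 8):
    "Low": ("no limitation",),
    "Some concerns": ("no limitation", "serious limitation"),
    "High": ("serious limitation", "very serious limitation"),
    # Non-randomized studies assessed with ROBINS-I:
    "Critical": ("extremely serious limitation",),
}

def compatible_judgements(rating):
    """Narrow the study-limitation options for an overall risk-of-bias rating.

    The final choice among the returned options remains a reviewer
    judgement based on the likely magnitude of the potential biases.
    """
    return ROB_TO_GRADE_OPTIONS[rating]
```

For instance, `compatible_judgements("Some concerns")` narrows the decision to ‘no limitation’ versus ‘serious limitation’; it cannot make that decision for the reviewer.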
Every study addressing a particular outcome will differ, to some degree, in the risk of bias. Review authors should make an overall judgement on whether the certainty of evidence for an outcome warrants downgrading on the basis of study limitations. The assessment of study limitations should apply to the studies contributing to the results in the ‘Summary of findings’ table, rather than to all studies that could potentially be included in the analysis. We have argued in Chapter 7, Section 7.6.2, that the primary analysis should be restricted to studies at low (or low and unclear) risk of bias where possible.
Table 14.2.a presents the judgements that must be made in going from assessments of the risk of bias to judgements about study limitations for each outcome included in a ‘Summary of findings’ table. A rating of high certainty evidence can be achieved only when most evidence comes from studies that met the criteria for low risk of bias. For example, of the 22 studies addressing the impact of beta-blockers on mortality in patients with heart failure, most probably or certainly used concealed allocation of the sequence, all blinded at least some key groups and follow-up of randomized patients was almost complete (Brophy et al 2001). The certainty of evidence might be downgraded by one level when most of the evidence comes from individual studies either with a crucial limitation for one item, or with some limitations for multiple items. An example of very serious limitations, warranting downgrading by two levels, is provided by evidence on surgery versus conservative treatment in the management of patients with lumbar disc prolapse (Gibson and Waddell 2007). We are uncertain of the benefit of surgery in reducing symptoms after one year or longer, because the one study included in the analysis had inadequate concealment of the allocation sequence and the outcome was assessed using a crude rating by the surgeon without blinding.
(2) Unexplained heterogeneity or inconsistency of results
When studies yield widely differing estimates of effect (heterogeneity or variability in results), investigators should look for robust explanations for that heterogeneity. For instance, drugs may have larger relative effects in sicker populations or when given in larger doses. A detailed discussion of heterogeneity and its investigation is provided in Chapter 10, Section 10.10 and Section 10.11. If an important modifier exists, with good evidence that important outcomes are different in different subgroups (which would ideally be pre-specified), then a separate ‘Summary of findings’ table may be considered for a separate population. For instance, a separate ‘Summary of findings’ table would be used for carotid endarterectomy in symptomatic patients with high grade stenosis (70% to 99%) in which the intervention is, in the hands of the right surgeons, beneficial, and another (if review authors considered it relevant) for asymptomatic patients with low grade stenosis (less than 30%) in which surgery appears harmful (Orrapin and Rerkasem 2017). When heterogeneity exists and affects the interpretation of results, but review authors are unable to identify a plausible explanation with the data available, the certainty of the evidence decreases.
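The degree of inconsistency is commonly quantified with Cochran's Q and the I² statistic discussed in Chapter 10. As a minimal, hedged illustration (not the pooled-analysis code a Cochrane Review would actually use; the function name and inputs are ours), I² can be computed from per-study effect estimates and their standard errors using fixed-effect inverse-variance weights:

```python
def i_squared(effects, std_errs):
    """Cochran's Q and I^2 (as a percentage) from per-study effect
    estimates (e.g. log risk ratios) and their standard errors,
    using fixed-effect inverse-variance weights.

    Illustrative only: real meta-analysis software also handles
    random-effects models and confidence intervals for I^2.
    """
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2
```

Identical study estimates give I² = 0% (all variability attributable to chance), while widely separated, precisely estimated effects push I² towards 100%.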
(3) Indirectness of evidence
Two types of indirectness are relevant. First, a review comparing the effectiveness of alternative interventions (say A and B) may find that randomized trials are available, but they have compared A with placebo and B with placebo. Thus, the evidence is restricted to indirect comparisons between A and B. Where indirect comparisons are undertaken within a network meta-analysis context, GRADE for network meta-analysis should be used (see Chapter 11, Section 11.5).
Second, a review may find randomized trials that meet eligibility criteria but address a restricted version of the main review question in terms of population, intervention, comparator or outcomes. For example, suppose that in a review addressing an intervention for secondary prevention of coronary heart disease, most identified studies happened to be in people who also had diabetes. Then the evidence may be regarded as indirect in relation to the broader question of interest because the population consists primarily of people with diabetes. The opposite scenario can equally apply: a review addressing the effect of a preventive strategy for coronary heart disease in people with diabetes may consider studies in people without diabetes to provide relevant, albeit indirect, evidence. This would be particularly likely if investigators had conducted few if any randomized trials in the target population (e.g. people with diabetes). Other sources of indirectness may arise from interventions studied (e.g. if in all included studies a technical intervention was implemented by expert, highly trained specialists in specialist centres, then evidence on the effects of the intervention outside these centres may be indirect), comparators used (e.g. if the comparator groups received an intervention that is less effective than standard treatment in most settings) and outcomes assessed (e.g. indirectness due to surrogate outcomes when data on patient-important outcomes are not available, or when investigators seek data on quality of life but only symptoms are reported). Review authors should make judgements transparent when they believe downgrading is justified, based on differences in anticipated effects in the group of primary interest. Using Table 14.2.b, available in the GRADEpro GDT software, may aid review authors and increase the transparency of their judgements about indirectness (Schünemann et al 2013).
(4) Imprecision of results
When studies include few participants or few events, and thus have wide confidence intervals, review authors can lower their rating of the certainty of the evidence. The confidence intervals included in the ‘Summary of findings’ table will provide readers with information that allows them to make, to some extent, their own rating of precision. Review authors can use a calculation of the optimal information size (OIS) or review information size (RIS), similar to sample size calculations, to make judgements about imprecision (Guyatt et al 2011b, Schünemann 2016). The OIS or RIS is calculated on the basis of the number of participants required for an adequately powered individual study. If the 95% confidence interval excludes a risk ratio (RR) of 1.0, and the total number of events or patients exceeds the OIS criterion, precision is adequate. If the 95% CI includes appreciable benefit or harm (an RR of under 0.75 or over 1.25 is often suggested as a very rough guide) downgrading for imprecision may be appropriate even if OIS criteria are met (Guyatt et al 2011b, Schünemann 2016).
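The reasoning in this paragraph can be sketched as a rough decision rule. Everything below is illustrative: the 0.75/1.25 thresholds are only the 'very rough guide' quoted above, and the function name and inputs are our own, not a GRADE specification:

```python
def imprecision_judgement(rr_low, rr_high, n_total, ois):
    """Rough sketch of the imprecision reasoning described in the text.

    rr_low, rr_high: 95% CI limits for the risk ratio.
    n_total: pooled number of participants (or events).
    ois: optimal information size from a conventional sample-size
         calculation for a single adequately powered trial.
    """
    # Appreciable benefit or harm inside the CI: downgrading may be
    # appropriate even when the OIS criterion is met.
    if rr_low < 0.75 or rr_high > 1.25:
        return "consider downgrading for imprecision"
    # CI excludes RR = 1.0 and the information size criterion is met.
    if not (rr_low <= 1.0 <= rr_high) and n_total >= ois:
        return "precision adequate"
    return "reviewer judgement required"
```

Cases that fall between these two rules (for example, a narrow CI that still crosses 1.0) are left to reviewer judgement, as in the text.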
(5) High probability of publication bias
The certainty of evidence level may be downgraded if investigators fail to report studies on the basis of results (typically those that show no effect: publication bias) or outcomes (typically those that may be harmful or for which no effect was observed: selective outcome non-reporting bias). Selective reporting of outcomes from among multiple outcomes measured is assessed at the study level as part of the assessment of risk of bias (see Chapter 8, Section 8.7), so for the studies contributing to the outcome in the ‘Summary of findings’ table this is addressed by domain 1 above (limitations in the design and implementation). If a large number of studies included in the review do not contribute to an outcome, or if there is evidence of publication bias, the certainty of the evidence may be downgraded. Chapter 13 provides a detailed discussion of reporting biases, including publication bias, and how it may be tackled in a Cochrane Review. A prototypical situation that may elicit suspicion of publication bias is when published evidence includes a number of small studies, all of which are industry-funded (Bhandari et al 2004). For example, 14 studies of flavonoids in patients with haemorrhoids have shown apparent large benefits, but enrolled a total of only 1432 patients (i.e. each study enrolled relatively few patients) (Alonso-Coello et al 2006). The heavy involvement of sponsors in most of these studies raises questions of whether unpublished studies that suggest no benefit exist (publication bias).
A particular body of evidence can suffer from problems associated with more than one of the five factors listed here, and the greater the problems, the lower the certainty of evidence rating that should result. One could imagine a situation in which randomized trials were available, but all or virtually all of these limitations would be present, and in serious form. A very low certainty of evidence rating would result.
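The cumulative effect of these judgements can be illustrated as simple arithmetic over the four certainty levels. This is a sketch of the level arithmetic only (the names are ours); actual GRADE assessments are transparent judgements, not mechanical sums:

```python
LEVELS = ["Very low", "Low", "Moderate", "High"]

def overall_certainty(start, downgrades, upgrades=0):
    """Combine total downgrades (and, where justified, upgrades) with a
    starting level: "High" for a body of randomized-trial evidence.
    The result is clamped to the Very low..High range.
    """
    idx = LEVELS.index(start) - downgrades + upgrades
    return LEVELS[max(0, min(idx, len(LEVELS) - 1))]
```

Randomized trials downgraded by a total of three levels across the five domains would, for example, end at very low certainty, matching the scenario described above.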
Table 14.2.a Further guidelines for domain 1 (of 5) in a GRADE assessment: going from assessments of risk of bias in studies to judgements about study limitations for main outcomes across studies
| Risk of bias | Across studies | Interpretation | Considerations | GRADE assessment of study limitations |
| --- | --- | --- | --- | --- |
| Low risk of bias | Most information is from results at low risk of bias. | Plausible bias unlikely to seriously alter the results. | No apparent limitations. | No serious limitations, do not downgrade. |
| Some concerns | Most information is from results at low risk of bias or with some concerns. | Plausible bias that raises some doubt about the results. | Potential limitations are unlikely to lower confidence in the estimate of effect. | No serious limitations, do not downgrade. |
| | | | Potential limitations are likely to lower confidence in the estimate of effect. | Serious limitations, downgrade one level. |
| High risk of bias | The proportion of information from results at high risk of bias is sufficient to affect the interpretation of results. | Plausible bias that seriously weakens confidence in the results. | Crucial limitation for one criterion, or some limitations for multiple criteria, sufficient to lower confidence in the estimate of effect. | Serious limitations, downgrade one level. |
| | | | Crucial limitation for one or more criteria sufficient to substantially lower confidence in the estimate of effect. | Very serious limitations, downgrade two levels. |
Table 14.2.b Judgements about indirectness by outcome (available in GRADEpro GDT)
| Domain | Judgement |
| --- | --- |
| Population: | Yes / Probably yes / Probably no / No |
| Intervention: | Yes / Probably yes / Probably no / No |
| Comparator: | Yes / Probably yes / Probably no / No |
| Direct comparison: | Yes / Probably yes / Probably no / No |
| Final judgement about indirectness across domains: | |
Although NRSI and downgraded randomized trials will generally yield a low rating for certainty of evidence, there will be unusual circumstances in which review authors could ‘upgrade’ such evidence to moderate or even high certainty (Table 14.3.a).
Review authors should report the grading of the certainty of evidence in the Results section for each outcome for which this has been performed, providing the rationale for downgrading or upgrading the evidence, and referring to the ‘Summary of findings’ table where applicable.
Table 14.3.a provides a framework and examples for how review authors can justify their judgements about the certainty of evidence in each domain. These justifications should also be included in explanatory notes to the ‘Summary of findings’ table (see Section 14.1.6.10).
Chapter 15, Section 15.6, describes in more detail how the overall GRADE assessment across all domains can be used to draw conclusions about the effects of the intervention, as well as providing implications for future research.
Table 14.3.a Framework for describing the certainty of evidence and justifying downgrading or upgrading
| GRADE domain | Description | Example justification |
| --- | --- | --- |
| Risk of bias | Describe the risk of bias based on the criteria used in the risk-of-bias table. | Downgraded because, of 10 randomized trials, five did not blind patients and caretakers. |
| Inconsistency | Describe the degree of inconsistency by outcome using one or more indicators (e.g. I² and P value, confidence interval overlap, difference in point estimates, between-study variance). | Not downgraded because the proportion of the variability in effect estimates that is due to true heterogeneity rather than chance is not important (I² = 0%). |
| Indirectness | Describe whether the majority of studies address the PICO: were they similar to the question posed? | Downgraded because the included studies were restricted to patients with advanced cancer. |
| Imprecision | Describe the number of events and the width of the confidence intervals. | The confidence intervals for the effect on mortality are consistent with both an appreciable benefit and appreciable harm, and we lowered the certainty. |
| Publication bias | Describe the possible degree of publication bias. | 1. The funnel plot of 14 randomized trials indicated that there were several small studies that showed a small positive effect, but small studies that showed no effect or harm may have been unpublished. The certainty of the evidence was lowered. 2. There are only three small positive studies; it appears that studies showing no effect or harm have not been published. There is also a for-profit interest in the intervention. The certainty of the evidence was lowered. |
| Large effect | Describe the magnitude of the effect and the width of the associated confidence intervals. | Upgraded because the RR is large: 0.3 (95% CI 0.2 to 0.4), with a sufficient number of events to be precise. |
| Dose-response gradient | Describe whether the studies show a clear relation between higher exposure levels and increases in the outcome (e.g. lung cancer). | Upgraded because the dose-response relation shows a relative risk increase of 10% in never smokers, 15% in smokers of 10 pack-years and 20% in smokers of 15 pack-years. |
| Opposing plausible residual bias or confounding | Describe which opposing plausible biases and confounders may not have been considered. | The estimate of effect is not controlled for the following possible confounders: smoking and degree of education; but the distribution of these factors in the studies is likely to lead to an underestimate of the true effect. The certainty of the evidence was increased. |
Authors: Holger J Schünemann, Julian PT Higgins, Gunn E Vist, Paul Glasziou, Elie A Akl, Nicole Skoetz, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group (formerly Applicability and Recommendations Methods Group) and the Cochrane Statistical Methods Group
Acknowledgements: Andrew D Oxman contributed to earlier versions. Professor Penny Hawe contributed to the text on adverse effects in earlier versions. Jon Deeks provided helpful contributions on an earlier version of this chapter. For details of previous authors and editors of the Handbook , please refer to the Preface.
Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health.
Alonso-Coello P, Zhou Q, Martinez-Zapata MJ, Mills E, Heels-Ansdell D, Johanson JF, Guyatt G. Meta-analysis of flavonoids for the treatment of haemorrhoids. British Journal of Surgery 2006; 93 : 909-920.
Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, Guyatt GH, Harbour RT, Haugh MC, Henry D, Hill S, Jaeschke R, Leng G, Liberati A, Magrini N, Mason J, Middleton P, Mrukowicz J, O'Connell D, Oxman AD, Phillips B, Schünemann HJ, Edejer TT, Varonen H, Vist GE, Williams JW, Jr., Zaza S. Grading quality of evidence and strength of recommendations. BMJ 2004; 328 : 1490.
Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, Guyatt GH. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology 2011; 64 : 401-406.
Bhandari M, Busse JW, Jackowski D, Montori VM, Schünemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. Canadian Medical Association Journal 2004; 170 : 477-480.
Brophy JM, Joseph L, Rouleau JL. Beta-blockers in congestive heart failure. A Bayesian meta-analysis. Annals of Internal Medicine 2001; 134 : 550-560.
Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P, Meerpohl JJ, Vandvik PO, Brozek JL, Akl EA, Bossuyt P, Churchill R, Glenton C, Rosenbaum S, Tugwell P, Welch V, Garner P, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary of findings tables with a new format. Journal of Clinical Epidemiology 2016; 74 : 7-18.
Deeks JJ, Altman DG. Effect measures for meta-analysis of trials with binary outcomes. In: Egger M, Davey Smith G, Altman DG, editors. Systematic Reviews in Health Care: Meta-analysis in Context . 2nd ed. London (UK): BMJ Publication Group; 2001. p. 313-335.
Devereaux PJ, Choi PT, Lacchetti C, Weaver B, Schünemann HJ, Haines T, Lavis JN, Grant BJ, Haslam DR, Bhandari M, Sullivan T, Cook DJ, Walter SD, Meade M, Khan H, Bhatnagar N, Guyatt GH. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Canadian Medical Association Journal 2002; 166 : 1399-1406.
Engels EA, Schmid CH, Terrin N, Olkin I, Lau J. Heterogeneity and statistical significance in meta-analysis: an empirical study of 125 meta-analyses. Statistics in Medicine 2000; 19 : 1707-1728.
Furukawa TA, Guyatt GH, Griffith LE. Can we individualize the 'number needed to treat'? An empirical study of summary effect measures in meta-analyses. International Journal of Epidemiology 2002; 31 : 72-76.
Gibson JN, Waddell G. Surgical interventions for lumbar disc prolapse: updated Cochrane Review. Spine 2007; 32 : 1735-1747.
Guyatt G, Oxman A, Vist G, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann H. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336 : 3.
Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, Jaeschke R, Rind D, Meerpohl J, Dahm P, Schünemann HJ. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 2011a; 64 : 383-394.
Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, Devereaux PJ, Montori VM, Freyschuss B, Vist G, Jaeschke R, Williams JW, Jr., Murad MH, Sinclair D, Falck-Ytter Y, Meerpohl J, Whittington C, Thorlund K, Andrews J, Schünemann HJ. GRADE guidelines 6. Rating the quality of evidence--imprecision. Journal of Clinical Epidemiology 2011b; 64 : 1283-1293.
Iorio A, Spencer FA, Falavigna M, Alba C, Lang E, Burnand B, McGinn T, Hayden J, Williams K, Shea B, Wolff R, Kujpers T, Perel P, Vandvik PO, Glasziou P, Schünemann H, Guyatt G. Use of GRADE for assessment of evidence about prognosis: rating confidence in estimates of event rates in broad categories of patients. BMJ 2015; 350 : h870.
Langendam M, Carrasco-Labra A, Santesso N, Mustafa RA, Brignardello-Petersen R, Ventresca M, Heus P, Lasserson T, Moustgaard R, Brozek J, Schünemann HJ. Improving GRADE evidence tables part 2: a systematic survey of explanatory notes shows more guidance is needed. Journal of Clinical Epidemiology 2016; 74 : 19-27.
Levine MN, Raskob G, Landefeld S, Kearon C, Schulman S. Hemorrhagic complications of anticoagulant treatment: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest 2004; 126 : 287S-310S.
Orrapin S, Rerkasem K. Carotid endarterectomy for symptomatic carotid stenosis. Cochrane Database of Systematic Reviews 2017; 6 : CD001081.
Salpeter S, Greyber E, Pasternak G, Salpeter E. Risk of fatal and nonfatal lactic acidosis with metformin use in type 2 diabetes mellitus. Cochrane Database of Systematic Reviews 2007; 4 : CD002967.
Santesso N, Carrasco-Labra A, Langendam M, Brignardello-Petersen R, Mustafa RA, Heus P, Lasserson T, Opiyo N, Kunnamo I, Sinclair D, Garner P, Treweek S, Tovey D, Akl EA, Tugwell P, Brozek JL, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 3: detailed guidance for explanatory footnotes supports creating and understanding GRADE certainty in the evidence judgments. Journal of Clinical Epidemiology 2016; 74 : 28-39.
Schünemann HJ, Best D, Vist G, Oxman AD, Group GW. Letters, numbers, symbols and words: how to communicate grades of evidence and recommendations. Canadian Medical Association Journal 2003; 169 : 677-680.
Schünemann HJ, Jaeschke R, Cook DJ, Bria WF, El-Solh AA, Ernst A, Fahy BF, Gould MK, Horan KL, Krishnan JA, Manthous CA, Maurer JR, McNicholas WT, Oxman AD, Rubenfeld G, Turino GM, Guyatt G. An official ATS statement: grading the quality of evidence and strength of recommendations in ATS guidelines and recommendations. American Journal of Respiratory and Critical Care Medicine 2006; 174 : 605-614.
Schünemann HJ, Oxman AD, Brozek J, Glasziou P, Jaeschke R, Vist GE, Williams JW, Jr., Kunz R, Craig J, Montori VM, Bossuyt P, Guyatt GH. Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ 2008a; 336 : 1106-1110.
Schünemann HJ, Oxman AD, Brozek J, Glasziou P, Bossuyt P, Chang S, Muti P, Jaeschke R, Guyatt GH. GRADE: assessing the quality of evidence for diagnostic recommendations. ACP Journal Club 2008b; 149 : 2.
Schünemann HJ, Mustafa R, Brozek J. [Diagnostic accuracy and linked evidence--testing the chain]. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 2012; 106 : 153-160.
Schünemann HJ, Tugwell P, Reeves BC, Akl EA, Santesso N, Spencer FA, Shea B, Wells G, Helfand M. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods 2013; 4 : 49-62.
Schünemann HJ. Interpreting GRADE's levels of certainty or quality of the evidence: GRADE for statisticians, considering review information size or less emphasis on imprecision? Journal of Clinical Epidemiology 2016; 75 : 6-15.
Schünemann HJ, Cuello C, Akl EA, Mustafa RA, Meerpohl JJ, Thayer K, Morgan RL, Gartlehner G, Kunz R, Katikireddi SV, Sterne J, Higgins JPT, Guyatt G, Group GW. GRADE guidelines: 18. How ROBINS-I and other tools to assess risk of bias in nonrandomized studies should be used to rate the certainty of a body of evidence. Journal of Clinical Epidemiology 2018.
Spencer-Bonilla G, Quinones AR, Montori VM, International Minimally Disruptive Medicine W. Assessing the Burden of Treatment. Journal of General Internal Medicine 2017; 32 : 1141-1145.
Spencer FA, Iorio A, You J, Murad MH, Schünemann HJ, Vandvik PO, Crowther MA, Pottie K, Lang ES, Meerpohl JJ, Falck-Ytter Y, Alonso-Coello P, Guyatt GH. Uncertainties in baseline risk estimates and confidence in treatment effects. BMJ 2012; 345 : e7401.
Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, Carpenter JR, Chan AW, Churchill R, Deeks JJ, Hróbjartsson A, Kirkham J, Jüni P, Loke YK, Pigott TD, Ramsay CR, Regidor D, Rothstein HR, Sandhu L, Santaguida PL, Schünemann HJ, Shea B, Shrier I, Tugwell P, Turner L, Valentine JC, Waddington H, Waters E, Wells GA, Whiting PF, Higgins JPT. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016; 355 : i4919.
Thompson DC, Rivara FP, Thompson R. Helmets for preventing head and facial injuries in bicyclists. Cochrane Database of Systematic Reviews 2000; 2 : CD001855.
Tierney JF, Stewart LA, Ghersi D, Burdett S, Sydes MR. Practical methods for incorporating summary time-to-event data into meta-analysis. Trials 2007; 8 .
van Dalen EC, Tierney JF, Kremer LCM. Tips and tricks for understanding and using SR results. No. 7: time‐to‐event data. Evidence-Based Child Health 2007; 2 : 1089-1090.
https://doi.org/10.1136/ebnurs-2021-103417
Literature reviews offer a critical synthesis of empirical and theoretical literature to assess the strength of evidence, develop guidelines for practice and policymaking, and identify areas for future research.1 Conducting a review is often essential, and is usually the first task, in any research endeavour, particularly in masters or doctoral level education. For effective data extraction and rigorous synthesis in reviews, the use of literature summary tables is of utmost importance. A literature summary table provides a synopsis of an included article: it succinctly presents its purpose, methods, findings and other relevant information pertinent to the review. The aim of developing these tables is to give the reader the key information about each article at a glance. Since there are multiple types of reviews (eg, systematic, integrative, scoping, critical and mixed methods) with distinct purposes and techniques,2 there could be various approaches for developing literature summary tables, making their development a complex task, especially for novice researchers or reviewers. Here, we offer five tips for authors of review articles, relevant to all types of reviews, for creating useful and relevant literature summary tables. We also provide examples from our published reviews to illustrate how useful literature summary tables can be developed and what sort of information should be provided.
Figure 1. Tabular literature summaries from a scoping review. Source: Rasheed et al.3
The provision of information about conceptual and theoretical frameworks and methods is useful for several reasons. First, in quantitative reviews (reviews synthesising the results of quantitative studies) and mixed reviews (reviews synthesising the results of both qualitative and quantitative studies to address a mixed review question), it allows the readers to assess the congruence of the core findings and methods with the adapted framework and tested assumptions. In qualitative reviews (reviews synthesising results of qualitative studies), this information is beneficial for readers to recognise the underlying philosophical and paradigmatic stance of the authors of the included articles. For example, imagine that the authors of an article included in a review used phenomenological inquiry for their research. In that case, the review authors and the readers of the review need to know which philosophical stance (transcendental or hermeneutic) guided the inquiry, and review authors should therefore include that stance in their literature summary for the particular article. Second, information about frameworks and methods enables review authors and readers to judge the quality of the research, which allows for discerning the strengths and limitations of the article. For example, suppose the authors of an included article intended to develop a new scale and test its psychometric properties, and to achieve this aim they used a convenience sample of 150 participants and performed exploratory (EFA) and confirmatory factor analysis (CFA) on the same sample. Such an approach would indicate a flawed methodology, because EFA and CFA should not be conducted on the same sample. The review authors must include this information in their summary table: omitting it could lead to the inclusion of a flawed article in the review, thereby jeopardising the review’s rigour.
Critical appraisal of individual articles included in a review is crucial for increasing the rigour of the review. Despite using various templates for critical appraisal, authors often do not provide detailed information about each reviewed article’s strengths and limitations. Merely noting the quality score based on standardised critical appraisal templates is not adequate, because readers should be able to identify the reasons for assigning a weak or moderate rating. Many recent critical appraisal checklists (eg, the Mixed Methods Appraisal Tool) discourage review authors from assigning a quality score and recommend noting the main strengths and limitations of included studies. It is also vital to report the methodological and conceptual limitations and strengths of the articles included in the review, because not all review articles include empirical research papers; rather, some reviews synthesise the theoretical aspects of articles. Providing information about conceptual limitations is also important for readers to judge the quality of the foundations of the research. For example, if you included a mixed-methods study in the review, reporting the methodological and conceptual limitations around ‘integration’ is critical for evaluating the study’s strength. Suppose the authors collected qualitative and quantitative data but did not state the intent and timing of integration. In that case, the strength of the study is weak: integration occurred only at the level of data collection, and may not have occurred at the analysis, interpretation and reporting levels.
While reading and evaluating review papers, we have observed that many review authors only provide core results of the article included in a review and do not explain the conceptual contribution offered by the included article. We refer to conceptual contribution as a description of how the article’s key results contribute towards the development of potential codes, themes or subthemes, or emerging patterns that are reported as the review findings. For example, the authors of a review article noted that one of the research articles included in their review demonstrated the usefulness of case studies and reflective logs as strategies for fostering compassion in nursing students. The conceptual contribution of this research article could be that experiential learning is one way to teach compassion to nursing students, as supported by case studies and reflective logs. This conceptual contribution of the article should be mentioned in the literature summary table. Delineating each reviewed article’s conceptual contribution is particularly beneficial in qualitative reviews, mixed-methods reviews, and critical reviews that often focus on developing models and describing or explaining various phenomena. Figure 2 offers an example of a literature summary table. 4
Tabular literature summaries from a critical review. Source: Younas and Maddigan. 4
While developing literature summary tables, many authors use themes or subthemes reported in the included articles as the key findings of their own review. Such an approach prevents the review authors from understanding each article's conceptual contribution, developing a rigorous synthesis and drawing reasonable interpretations of the results of an individual article. Ultimately, it affects the generation of novel review findings. For example, one of the articles about women's healthcare-seeking behaviours in developing countries reported a theme 'social-cultural determinants of health as precursors of delays'. Instead of using this theme as one of the review findings, the reviewers should read and interpret beyond the given description, comparing and contrasting the themes and findings of one article with those of another to identify similarities and differences, and to understand and explain the bigger picture for their readers. Therefore, while developing literature summary tables, think twice before using predeveloped themes. Including your own themes in the summary tables (see figure 1) demonstrates to the readers that a robust method of data extraction and synthesis has been followed.
Often, templates are available for data extraction and the development of literature summary tables. These templates may take the form of a table, chart or structured framework that extracts essential information about every article. The commonly extracted information includes authors, purpose, methods, key results and quality scores. While extracting all relevant information is important, such templates should be tailored to meet the needs of the individual review. For example, for a review about the effectiveness of healthcare interventions, a literature summary table must include information about the intervention: its type, content, timing, duration, setting, effectiveness, negative consequences, and receivers' and implementers' experiences of its use. Similarly, literature summary tables for articles included in a meta-synthesis must include information about the participants' characteristics, research context and conceptual contribution of each reviewed article, so as to help the reader make an informed decision about the usefulness of each individual article and of the whole review.
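As a minimal sketch of the tailoring idea above, a review team working digitally could express their extraction template as a simple data structure. The field names here are hypothetical illustrations assembled from the elements mentioned in this section, not a published template:

```python
# Hypothetical, tailored data-extraction template for a review of
# healthcare interventions. Field names are illustrative only; adapt
# them to the specific review question.
INTERVENTION_REVIEW_TEMPLATE = {
    # Core bibliographic fields common to most templates
    "authors": None,
    "purpose": None,
    "methods": None,
    "key_results": None,
    # Fields tailored to an intervention-effectiveness review
    "intervention_type": None,
    "content": None,
    "timing": None,
    "duration": None,
    "setting": None,
    "effectiveness": None,
    "negative_consequences": None,
    "user_experiences": None,  # receivers' and implementers' experiences
    # Fields recommended above for meta-synthesis tables
    "participant_characteristics": None,
    "research_context": None,
    "conceptual_contribution": None,
}


def extract_row(article_data: dict) -> dict:
    """Populate one summary-table row, keeping only expected fields so
    that every row in the literature summary table has the same columns."""
    row = dict(INTERVENTION_REVIEW_TEMPLATE)
    for field, value in article_data.items():
        if field in row:
            row[field] = value
    return row
```

Keeping the template explicit like this makes it easy to spot articles for which a tailored field (eg, `conceptual_contribution`) was never filled in during extraction.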
In conclusion, narrative or systematic reviews are almost always conducted as part of an educational project (thesis or dissertation) or of academic or clinical research. Literature reviews are the foundation of research on a given topic. Robust, high-quality reviews play an instrumental role in guiding research, practice and policymaking. However, the quality of a review is contingent on rigorous data extraction and synthesis, which require the development of literature summaries. We have outlined five tips that could enhance the quality of the data extraction and synthesis process through the development of useful literature summaries.
Twitter @Ahtisham04, @parveenazamali
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
What are systematic reviews? The Polit–Beck evidence hierarchy/levels-of-evidence scale for Therapy questions.
"Figure 2.2 [in context of book] shows our eight-level evidence hierarchy for Therapy/intervention questions. This hierarchy ranks sources of evidence with respect to the readiness of an intervention to be put to use in practice" (Polit & Beck, 2021, p. 28). Levels are ranked by risk of bias: level one is the least biased, level eight the most biased. There are several types of levels-of-evidence scales designed for answering different questions. "An evidence hierarchy for Prognosis questions, for example, is different from the hierarchy for Therapy questions" (p. 29).
"Through controls imposed by manipulation, comparison, and randomization, alternative explanations can be discredited. It is because of this strength that meta-analyses of RCTs, which integrate evidence from multiple experiments, are at the pinnacle of the evidence hierarchies for Therapy questions" (p. 188).
"Tip: Traditional evidence hierarchies or level of evidence scales (e.g., Figure 2.2), rank evidence sources almost exclusively based on the risk of internal validity threats" (p. 217).
Systematic reviews can provide researchers with knowledge of what prior evidence shows. This can help clarify the established efficacy of a treatment and avoid unnecessary, and thus unethical, research. Greenhalgh (2019) illustrates this by citing the systematic review by Dean Fergusson and colleagues (2005) on a clinical surgical topic (p. 128).
Regarding the importance of real-world clinical practice settings, and the conflicting trade-offs between internal and external validity, Polit and Beck (2021) write, "the first (and most prevalent) approach is to emphasize one and sacrifice another. Most often, it is external validity that is sacrificed. For example, external validity is not even considered in ranking evidence in level of evidence scales" (p. 221). They add, "From an EBP perspective, it is important to remember that drawing inferences about causal relationships relies not only on how high up on the evidence hierarchy a study is (Figure 2.2), but also, for any given level of the hierarchy, how successful the researcher was in managing study validity and balancing competing validity demands" (p. 222).
Polit and Beck, citing Levin (2014), note that an evidence hierarchy "is not meant to provide a quality rating for evidence retrieved in the search for an answer" (p. 6), and the Oxford Centre for Evidence-Based Medicine concurs that evidence scales are "NOT intended to provide you with a definitive judgment about the quality of the evidence. There will inevitably be cases where 'lower-level' evidence ... will provide stronger evidence than a 'higher level' study" (Howick et al., 2011, p. 2, as cited in Polit & Beck, 2021, p. 30).
Level of evidence (e.g., Figure 2.2) + Quality of evidence = Strength of evidence.
"The 6S hierarchy does not imply a gradient of evidence in terms of quality , but rather in terms of ease in retrieving relevant evidence to address a clinical question. At all levels, the evidence should be assessed for quality and relevance" (Polit & Beck, 2021, p. 24, Tip box).
The 6S Pyramid proposes a structure of quantitative evidence where articles that include pre-appraised and pre-synthesized studies are located at the top of the hierarchy (McMaster U., n.d.).
It can help to consider the level of evidence that a document represents; for example, a scientific article that summarizes and analyzes many similar articles may provide more insight than the conclusions of a single research article. This is not to say that summaries cannot be flawed, nor does it suggest that rare case studies should be ignored. The aim of health research is the well-being of all people; it is therefore important to use current evidence in light of patient preferences, negotiated with clinical expertise.
While it is accepted that the strongest evidence is derived from meta-analyses, various evidence grading systems exist, for example:

The Johns Hopkins Nursing Evidence-Based Practice model ranks evidence from level I to level V, as follows (Seben et al., 2010):
Level I: Meta-analysis of randomized clinical trials (RCTs); experimental studies; RCTs
Level II: Quasi-experimental studies
Level III: Non-experimental or qualitative studies
Level IV: Opinions of nationally recognized experts based on research evidence or an expert consensus panel
Level V: Opinions of individual experts based on non-research evidence (e.g., case studies, literature reviews, organizational experience, and personal experience)

The American Association of Critical-Care Nurses (AACN) evidence level system, updated in 2009, ranks evidence as follows (Armola et al., 2009):
Level A: Meta-analysis of multiple controlled studies or meta-synthesis of qualitative studies with results that consistently support a specific action, intervention, or treatment
Level B: Well-designed, controlled randomized or non-randomized studies with results that consistently support a specific action, intervention, or treatment
Level C: Qualitative, descriptive, or correlational studies, integrative or systematic reviews, or RCTs with inconsistent results
Level D: Peer-reviewed professional organizational standards, with clinical studies to support recommendations
Level E: Theory-based evidence from expert opinion or multiple case reports
Level M: Manufacturers' recommendations (2017)
Unfiltered resources are primary sources describing original research. Randomized controlled trials, cohort studies, case-control studies, and case series/reports are considered unfiltered information.
Filtered resources are secondary sources that summarize and analyze the available evidence. They evaluate the quality of individual studies and often provide recommendations for practice. Systematic reviews, critically appraised topics, and critically appraised individual articles are considered filtered information.
Armola, R. R., Bourgault, A. M., Halm, M. A., Board, R. M., Bucher, L., Harrington, L., ... Medina, J. (2009). AACN levels of evidence: What's new? Critical Care Nurse, 29(4), 70-73. doi:10.4037/ccn2009969
DiCenso, A., Bayley, L., & Haynes, R. B. (2009). Accessing pre-appraised evidence: Fine-tuning the 5S model into a 6S model. BMJ Evidence-Based Nursing, 12(4). https://ebn.bmj.com/content/12/4/99.2.short
Fergusson, D., Glass, K. C., Hutton, B., & Shapiro, S. (2005). Randomized controlled trials of aprotinin in cardiac surgery: Could clinical equipoise have stopped the bleeding? Clinical Trials, 2(3), 218-232.
Glover, J., Izzo, D., Odato, K., & Wang, L. (2008). Evidence-based mental health resources. EBM Pyramid and EBM Page Generator. Retrieved April 28, 2020 from https://web.archive.org/web/20200219181415/http://www.dartmouth.edu/~biomed/resources.htmld/guides/ebm_psych_resources.html Note: Document removed from host; the original webpage at http://www.dartmouth.edu/~biomed/resources.htmld/guides/ebm_psych_resources.html was retrieved on 2/10/21 via the Internet Archive's WayBack Machine.
Greenhalgh, T. (2019). How to read a paper: The basics of evidence-based medicine and healthcare (6th ed.). Wiley Blackwell.
Haynes, R. B. (2001). Of studies, syntheses, synopses, and systems: The "4S" evolution of services for finding current best evidence. BMJ Evidence-Based Medicine, 6(2), 36-38.
Haynes, R. B. (2006). Of studies, syntheses, synopses, summaries, and systems: The "5S" evolution of information services for evidence-based healthcare decisions. BMJ Evidence-Based Medicine, 11(6), 162-164.
McMaster University. (n.d.). 6S Search Pyramid Tool. https://www.nccmt.ca/capacity-development/6s-search-pyramid
Polit, D., & Beck, C. (2019). Nursing research: Generating and assessing evidence for nursing practice. Wolters Kluwer Health.
Schub, E., Walsh, K., & Pravikoff, D. (Ed.). (2017). Evidence-based nursing practice: Implementing [Skill Set]. Nursing Reference Center Plus.
Seben, S., March, K. S., & Pugh, L. C. (2010). Evidence-based practice: The forum approach. American Nurse Today, 5(11), 32-34.
Cochrane. (2016, Jan 27). What are systematic reviews? [Video]. YouTube. https://www.youtube.com/watch?v=egJlW4vkb1Y
Davies, A. (2019). Carrying out systematic literature reviews: An introduction. British Journal of Nursing, 28(15), 1008-1014. https://doi.org/10.12968/bjon.2019.28.15.1008
Greenhalgh, T. (2019). Papers that summarize other papers (systematic reviews and meta-analyses). In How to read a paper: The basics of evidence-based medicine and healthcare (6th ed., pp. 117-136). Wiley Blackwell.
Holly, C. (2017). Systematic review. In J. Fitzpatrick (Ed.), Encyclopedia of nursing research (4th ed.). Springer Publishing Company. Credo Reference.
Zhang, J., Han, L., Shields, L., Tian, J., & Wang, J. (2019). A PRISMA assessment of the reporting quality of systematic reviews of nursing published in the Cochrane Library and paper-based journals. Medicine, 98(49), e18099. https://doi.org/10.1097/MD.0000000000018099
LMIC indicates low- and- middle-income country; SR, systematic review.
a This review included distinct conclusions about separate conditions and comparators, and so it appears in this map more than once.
eAppendix 1. Search Strategies
eAppendix 2. Excluded Studies
eAppendix 3. Evidence Table
eAppendix 4. Conditions in Previously Published Map in 2018 and Current Map
eReferences.
Data Sharing Statement
Mak S, Allen J, Begashaw M, et al. Use of Massage Therapy for Pain, 2018-2023: A Systematic Review. JAMA Netw Open. 2024;7(7):e2422259. doi:10.1001/jamanetworkopen.2024.22259
© 2024
Question What is the certainty or quality of evidence in recent systematic reviews for use of massage therapy for painful adult health conditions?
Findings This systematic review identified 129 systematic reviews in a search of the literature published since 2018; of these, 41 assessed the certainty or quality of evidence of their conclusions. Overall, 17 systematic reviews regarding 13 health conditions were mapped, and most reviews concluded that the certainty of evidence was low or very low.
Meaning This study found that despite massage therapy having been the subject of hundreds of randomized clinical trials and dozens of systematic reviews about adult health conditions since 2018, there were few conclusions that had greater than low certainty of evidence.
Importance Massage therapy is a popular treatment that has been advocated for dozens of painful adult health conditions and has a large evidence base.
Objective To map systematic reviews, conclusions, and certainty or quality of evidence for outcomes of massage therapy for painful adult health conditions.
Evidence Review In this systematic review, a computerized search was conducted of PubMed, the Allied and Complementary Medicine Database, the Cumulated Index to Nursing and Allied Health Literature, the Cochrane Database of Systematic Reviews, and Web of Science from 2018 to 2023. Included studies were systematic reviews of massage therapy for pain in adult health conditions that formally rated the certainty, quality, or strength of evidence for conclusions. Studies of sports massage therapy, osteopathy, dry cupping or dry needling, and internal massage therapy (eg, for pelvic floor pain) were ineligible, as were self-administered massage therapy techniques, such as foam rolling. Reviews were categorized as those with at least 1 conclusion rated as high-certainty evidence, at least 1 conclusion rated as moderate-certainty evidence, and all conclusions rated as low- or very low–certainty evidence; a full list of conclusions and certainty of evidence was collected.
Findings A total of 129 systematic reviews of massage therapy for painful adult health conditions were found; of these, 41 reviews used a formal method to rate certainty or quality of evidence of their conclusions and 17 reviews were mapped, covering 13 health conditions. Across these reviews, no conclusions were rated as high certainty of evidence. There were 7 conclusions that were rated as moderate-certainty evidence; all remaining conclusions were rated as low- or very low–certainty evidence. All conclusions rated as moderate certainty were that massage therapy had a beneficial association with pain.
Conclusions and Relevance This study found that despite a large number of randomized clinical trials, systematic reviews of massage therapy for painful adult health conditions rated a minority of conclusions as moderate-certainty evidence and that conclusions with moderate- or high-certainty evidence that massage therapy was superior to other active therapies were rare.
Massage therapy is a popular and widely accepted complementary and integrative health modality for individuals seeking relief from pain. 1 This therapy is the practice of manual assessment and manipulation of the superficial soft tissues of skin, muscle, tendon, ligament, and fascia and the structures that lie within the superficial tissues for therapeutic purpose. 2 Individuals may seek massage therapy to address pain where conventional treatments may not always provide complete relief or may come with potential adverse effects. Massage therapy encompasses a range of techniques, styles, and durations and is intended to be delivered by uniquely trained and credentialed therapists. 3 Original research studies have reported on massage therapy delivered by a wide variety of health care professionals, such as physical therapists, physiotherapists, and nurses. 4 , 5 Despite massage therapy’s popularity and long history in practice, evidence of beneficial outcomes associated with massage therapy remains limited.
The Department of Veterans Affairs (VA) previously produced an evidence map of massage therapy for pain, which included systematic reviews published through 2018. 6 An evidence map is a form of systematic review that assesses a broad field to identify the state of the evidence, gaps in knowledge, and future research needs and that presents results in a user-friendly format, often a visual figure or graph. 7 To categorize this evidence base for use in decision-making by policymakers and practitioners, VA policymakers requested a new evidence map of reviews published since 2018 to answer the question “What is the certainty of evidence in systematic reviews of massage therapy for pain?”
This systematic review is an extension of a study commissioned by the VA. While not a full systematic review, this study nevertheless reports methods and results using the Preferred Reporting Items for Systematic Reviews and Meta-analyses ( PRISMA ) reporting guideline where applicable and filed the a priori protocol with the VA Evidence Synthesis Program Coordinating Center. Requirements for review and informed consent were waived because the study was designated as not human participants research.
Literature searches were based on searches used for the evidence map of massage therapy completed in 2018. 8 We searched 5 databases for relevant records published from July 2018 to April 2023 using the search terms “massage,” “acupressure,” “shiatsu,” “myofascial release therapy,” “systematic*,” “metaanaly*,” and similar terms. The databases were PubMed, the Allied and Complementary Medicine Database, the Cumulated Index to Nursing and Allied Health Literature, the Cochrane Database of Systematic Reviews, and Web of Science. See eAppendix 1 in Supplement 1 for full search strategies.
Each title was screened independently by 2 authors for relevance (S.M., J.A., and P.G.S.). Abstracts were then reviewed in duplicate, with any discrepancies resolved by group discussion. To be included, abstracts or titles needed to be about efficacy or effectiveness of massage therapy for a painful adult health condition and be a systematic review with more than 1 study about massage therapy. A systematic review was defined as a review that had a documented systematic method for identifying and critically appraising evidence. In general, any therapist-delivered modality described as massage therapy by review authors was considered eligible (eg, tuina, acupressure, auricular acupressure, reflexology, and myofascial release). Sports massage therapy, osteopathy, dry cupping or dry needling, and internal massage therapy (eg, for pelvic floor pain) were ineligible, as were self-administered massage therapy techniques, like foam rolling. Reviews had to be about a painful condition for adults, and we excluded publications in low- and middle-income countries because of differences in resources for usual care or other active treatments for included conditions. Publications were required to compare massage therapy with sham or placebo massage, usual care, or other active therapies. Systematic reviews that covered other interventions were eligible if results for massage therapy were reported separately.
We next restricted eligibility to reviews that used formal methods to assess the certainty (sometimes called strength or quality) of the evidence for conclusions. In general, this meant using Grading of Recommendations, Assessment, Development, and Evaluations (GRADE). 9 However, other formal methods were also included, such as the approach used by the US Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) program. To be included, a review had to state or cite the method used and report the certainty (or strength or quality) of evidence for each conclusion. After we applied this restriction, most health conditions had only 1 systematic review meeting the eligibility criteria, and we used this review for the map. Among conditions for which we identified more than 1 review meeting the eligibility criteria, we first assessed whether reviews differed in some other feature used to classify reviews on our map (eg, different comparators or type of massage therapy), which we would label with the appropriate designation (such as vs usual care or reflexology ). If there were multiple reviews about the same condition and they did not differ in some other feature, we selected the systematic review we judged as being most informative for readers. In general, this was the most recent review or the review with the greatest number of included studies.
Data on study condition, number of articles in a review, intervention characteristics, comparators, conclusions, and certainty, quality, or strength of evidence were extracted by 1 reviewer and then verified by a second reviewer (S.M., J.A., and P.G.S.). Our evidence mapping process produced a visual depiction of the evidence for massage therapy, as well as an accompanying narrative with an ancillary figure and table.
The visual depiction or evidence map uses a bubble plot format to display information on 4 dimensions: bubble size, bubble label, x-axis, and y-axis. This allowed us to provide the following types of information about each included systematic review:
Number of articles in systematic review (bubble size): The size of each bubble corresponds to the number of relevant primary research studies included in a systematic review.
Condition (bubble label): Each bubble is labeled with the condition discussed by that systematic review.
Shapes and colors: Intervention characteristics for each condition are presented in the form of colors (type of intervention) and shapes (comparators). For type of intervention, we included nonspecified massage therapy, tuina, myofascial release, reflexology, acupressure, and auricular acupressure. For comparators, we included mixed comparators with subgroups, mixed comparators with no subgroups, sham or placebo, and active therapy or usual care. A condition can appear more than once if multiple systematic reviews included different types of massage therapy or different comparators.
Strength of findings (rows): Each condition is plotted on the map based on the ratings of certainty of evidence statement as reported in the systematic reviews: high, moderate, low, or very low.
Outcome associated with massage therapy (columns): Each condition is plotted in potential benefit or no benefit as the outcome associated with massage therapy. Columns are not mutually exclusive. A review could have more than 1 conclusion, and conclusions could differ in the benefit associated with massage therapy. Both conclusions are included on the map.
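The four dimensions above can be sketched as a small mapping step that converts each review conclusion into plotting attributes for a bubble plot. This is a hypothetical illustration of the map's structure (the function, field names, and example data are assumptions, not taken from the published map):

```python
# Hypothetical sketch: translate one systematic-review conclusion into
# evidence-map bubble-plot attributes. Rows (y) encode certainty of
# evidence; columns (x) encode the outcome direction; bubble size
# scales with the number of included primary studies.

CERTAINTY_ROWS = {"very low": 0, "low": 1, "moderate": 2, "high": 3}
OUTCOME_COLS = {"no benefit": 0, "potential benefit": 1}


def to_bubble(conclusion: dict) -> dict:
    """Convert one review conclusion into bubble-plot attributes."""
    return {
        "label": conclusion["condition"],              # bubble label
        "x": OUTCOME_COLS[conclusion["outcome"]],      # column
        "y": CERTAINTY_ROWS[conclusion["certainty"]],  # row
        "size": 40 * conclusion["n_studies"],          # arbitrary area scale
    }


# A condition may yield more than one bubble, e.g. one per comparator
# or per conclusion; each conclusion is mapped independently.
example = to_bubble({
    "condition": "chronic low back pain",
    "outcome": "potential benefit",
    "certainty": "moderate",
    "n_studies": 8,
})
```

Because columns are not mutually exclusive, a review with conclusions of differing benefit simply produces one bubble per conclusion, which matches how conditions appear more than once on the map.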
Risk of bias is not part of the method of an evidence map. We assessed the quality of included reviews using criteria developed by the US Preventive Services Task Force (USPSTF). Certainty of evidence as determined by the original authors of the systematic review was abstracted for each conclusion in each systematic review and tabulated.
The search identified 1164 potentially relevant citations. Among 129 full-text articles screened, 41 publications were retained for further review. Of these, 24 reviews were excluded from the map for the following reasons: only 1 primary study about interventions of interest (11 studies), outcomes associated with massage therapy could not be distinguished from other included interventions (5 studies), not an intervention of interest (3 studies), not a comparison of interest (2 studies), overlap with a more recent or larger review that was already included on the map (2 studies), and self-delivered therapy (1 study). We included 17 publications in this map covering 13 health conditions. 4 , 10 - 25 The literature flowchart ( Figure 1 ) summarizes results of the study selection process, and eAppendix 2 in Supplement 1 presents citations for all excluded reviews at full-text screening.
The total number of primary studies about massage therapy for pain in the included reviews ranged from 2 studies to 23 studies. There were 12 reviews that included fewer than 10 primary studies 4 , 11 - 17 , 20 - 23 and 5 reviews that included 10 to 25 studies about massage therapy for pain. 10 , 18 , 19 , 24 , 25 Of included reviews, 3 reviews were completed by the Cochrane Collaboration 4 , 19 , 23 and 2 reviews were completed by the AHRQ EPC program. 11 , 18
We categorized the included 17 reviews by health condition. These categories were cancer-related pain, 15 , 24 back pain (including chronic back pain, 25 chronic low back pain, 18 , 22 and low back pain 17 ), chronic neck pain, 18 fibromyalgia, 21 labor pain, 4 , 19 mechanical neck pain, 13 myofascial pain, 14 palliative care needs, 10 plantar fasciitis, 12 post–breast cancer surgery pain, 16 postcesarean pain, 23 postpartum pain, 20 and postoperative pain. 11
Of 17 included reviews, 3 reviews included more than 1 type of massage therapy and 14 reviews included 1 type of massage therapy. Reviews by Chou et al 11 and Smith et al 16 included acupressure and nonspecified massage therapy as interventions. The review by Candy et al 7 included reflexology and nonspecified massage therapy as interventions. Of the 14 reviews with 1 type of massage therapy, there were 5 reviews describing nonspecified massage therapy, 10 , 14 , 17 , 20 1 review about tuina, 22 5 reviews about myofascial release, 8 , 9 , 12 , 18 , 19 and 3 reviews about acupressure. 13 , 15 , 21
A variety of comparators were included in reviews. Of 9 reviews that included more than 1 comparator in analyses, 4 , 11 , 13 , 14 , 18 - 22 2 reviews did not conduct separate analyses by comparator (labeled mixed with no subgroups ) 13 , 14 and 3 reviews conducted separate analyses by comparator (labeled mixed with subgroups ). 4 , 21 , 22 The other 4 reviews included a mix of comparators with separate conclusions: sham or placebo and active therapy or usual care, 11 mixed with no subgroups and active therapy or usual care, 18 mixed with subgroups and active therapy or usual care, 20 and mixed with no subgroups, sham, and active therapy or usual care. 19 There were 8 reviews that included 1 comparator only in their analyses, 10 , 12 , 15 - 17 , 23 - 25 with 7 reviews that described interventions compared with active therapy or usual care only, 10 , 12 , 15 , 17 , 23 - 25 while 1 review limited inclusion to primary studies with a sham or placebo comparator. 16
There was substantial variation in the reporting of other details from primary studies in included reviews. Any study that did not specify the mode of delivery was included; studies that explicitly stated that massage therapy was self-delivered were excluded. Of the 17 included reviews, 5 reviews provided details of personnel who administered the therapy, including massage therapist, nurse, aromatherapist, physiotherapist, and reflexologist. 4 , 10 , 19 - 21 A total of 7 reviews presented length of sessions (eg, 5-minute or 90-minute sessions for massage therapy studies and 30-second or 5-minute sessions for acupressure studies). 10 , 16 , 18 , 20 - 23 With the exception of the review by He et al, 15 all reviews reported details about frequency, duration, or both when available. A total of 9 reviews included information about frequency of sessions (eg, 1 session or once every 3 weeks for massage therapy studies and 4 times per day or daily for acupressure studies), 10 , 12 , 16 - 18 , 20 - 23 and 9 reviews reported duration of sessions (eg, single session or 3 months). 10 - 12 , 16 - 18 , 20 , 22 , 23 There were 7 reviews that included details about follow-up (eg, 1 week or 12 months). 10 , 13 , 17 , 18 , 21 , 23 , 25
Using USPSTF criteria to rate the quality of included reviews, 10 reviews were rated good 4 , 10 , 11 , 14 - 16 , 18 , 19 , 21 , 23 and 7 reviews were rated fair. 12 , 13 , 17 , 20 , 22 , 24 , 25 See eAppendix 3 in Supplement 1 for each review’s rating.
Figure 2 is a visual depiction of the following types of information about each included systematic review: condition, types of comparison treatments (shapes), types of massage therapy (color), number of articles included for each conclusion (bubble size), outcomes associated with massage therapy for pain (columns), and certainty of evidence rating (rows). There were 6 reviews mapped more than once, reflecting primary studies describing more than 1 health condition, 18 more than 1 type of massage therapy, 10 , 20 or outcomes associated with massage therapy compared with different comparators. 11 , 17 - 19 There were 7 conditions from reviews 14 , 16 - 19 , 21 , 22 that reported 1 conclusion rated as moderate-certainty evidence, all of which concluded that massage therapy was associated with beneficial outcomes for pain ( Table 1 ). However, most other conditions had conclusions rated as low- or very low–certainty evidence (12 reviews about 10 conditions 4 , 10 - 13 , 15 , 17 - 20 , 23 - 25 ). This rating means “Our confidence in the effect estimate is limited. The true effect may be substantially different from the estimate of effect,” or “We have very little confidence in the effect estimate.” See eAppendix 3 in Supplement 1 for conclusions in all reviews. This map included 4 conditions that did not appear in the 2018 map, 12 , 16 , 20 , 23 and there were 8 conditions in the 2018 map that did not have new reviews meeting eligibility criteria (mainly a formal grading of the certainty of evidence); 7 health conditions 10 , 11 , 13 - 15 , 17 , 18 , 21 , 22 , 24 , 25 were included in the 2018 map and the new map (see details in eAppendix 4 in Supplement 1 ).
Evidence about adverse events was collected by approximately half of included reviews, and no serious adverse events were reported. While 11 of 17 reviews 10 , 11 , 13 , 15 , 17 - 19 , 22 - 25 described adverse events, 2 reviews 18 , 23 included certainty of evidence conclusions for adverse events for 3 health conditions ( Table 2 ).
There is a large literature of original randomized clinical trials and systematic reviews of randomized clinical trials of massage therapy as a treatment for pain. Our systematic review found that despite this literature, there were only a few conditions for which authors of systematic reviews concluded that there was at least moderate-certainty evidence regarding health outcomes associated with massage therapy and pain. Most reviews reported low- or very low–certainty evidence. Although adverse events associated with massage therapy for pain were rare, the evidence was limited. For reviews that had conclusions about adverse events, the authors either were uncertain whether there was a difference between groups or found no difference, and rated the evidence as low to very low certainty.
Massage therapy is a broad term that is inclusive of many styles and techniques. We applied exclusion criteria determined a priori to help identify publications for inclusion in the evidence map. Despite that procedure, there was still a lack of clarity in determining what massage therapy is. For instance, acupressure was sometimes considered acupuncture and other times considered massage therapy, depending on author definition. In this case, we reviewed and included only publications that were explicitly labeled acupressure and did not review publications about acupuncture only. This highlights a fundamental issue with examining the evidence base of massage therapy for pain when there is ambiguity in defining what is considered massage therapy.
Unlike a pharmaceutical placebo, sham massage therapy may not be truly inactive. It is conceivable that even the light touch or touch with no clear criterion 26 used in sham massage therapy may be associated with some positive outcomes, meaning that patients who receive the massage therapy intervention and those who receive a sham massage therapy could both demonstrate some degree of symptom improvement. Limitations of sham comparators raise the question of whether sham or placebo treatment is an appropriate comparison group in massage therapy trials. It may be more informative to compare massage therapy with other treatments that are accessible and whose benefits are known so that any added beneficial outcomes associated with massage therapy could be better isolated and understood.
Compared with the 2018 map, our map included 4 new conditions not on the 2018 map, while 8 conditions from the 2018 map had no new reviews meeting eligibility criteria and 7 health conditions appeared in both maps. Despite identifying new conditions and conclusions with higher certainty of evidence in several reviews in our updated search, most included reviews reported low or very low certainty of evidence, suggesting that the most critical research need is for better evidence to increase the certainty of evidence for massage therapy for pain. This is a challenge given that massage, like other complementary and integrative health interventions, does not have the historical research infrastructure that most health professions have.27 Nevertheless, it is only when systematic reviews and meta-analyses are conducted with high-quality primary studies that the association or lack of association of massage therapy with pain will reach higher certainty of evidence. Studies comparing massage therapy with placebo or sham are probably not the priority; rather, the priority should be studies comparing massage therapy with other recommended, accepted, and active therapies for pain. Studies comparing massage therapy with other recommended therapies should also have a sufficiently long follow-up to allow any nonspecific outcomes (eg, those associated with receiving some new treatment) to dissipate. For example, this period has been proposed to be at least 6 months for studies of chronic pain.
There are 2 main limitations to this systematic review’s evidence map. The first, common to all systematic reviews, is that we may not have identified all potentially eligible evidence. If a systematic review was published in a journal not indexed in any of the 5 databases we searched and we did not identify it as part of our search of references of included publications, then we would have missed it. Nevertheless, our search strategy identified more than 200 publications about massage therapy for pain published since July 2018, so we did not lack potential reviews to evaluate. The second limitation of evidence maps is that we did not independently evaluate the source evidence; in other words, we took the conclusions of the systematic review authors at face value. That is the nature of an evidence map. Particular to this application of the mapping process, we mapped the review we deemed most informative for the 2 health conditions that had more than 1 eligible review (back pain and labor pain). This necessarily requires judgment, and others could disagree with that judgment. We included the citations for reviews excluded from the map for this overlap reason in the supplemental material, and interested readers can review them for additional information. As in all evidence-based products, and particularly in 1 such as this covering a large and complex evidence base, it is possible that there are errors of data extraction and compilation. We used dual review to minimize the chance of such errors, but if we are notified of errors, we will correct them.
Although this systematic review found that the number of conclusions about the effectiveness of massage therapy that were judged to have at least moderate certainty of evidence was greater now than in 2018, it was still small relative to the need. More high-quality randomized clinical trials are needed to provide a stronger evidence base to assess the effect of massage therapy on pain. For painful conditions that do not have at least moderate-certainty evidence supporting use of massage therapy, new studies that address the limitations of existing research are needed. The field of massage therapy would be best advanced by providing the wider research community with clearer definitions of massage therapy and guidance on whether it is appropriate to include multiple modalities in the same systematic review.
Accepted for Publication: May 15, 2024.
Published: July 15, 2024. doi:10.1001/jamanetworkopen.2024.22259
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Mak S et al. JAMA Network Open.
Corresponding Author: Selene Mak, PhD, MPH, Veterans Health Administration, Greater Los Angeles Healthcare System, 11301 Wilshire Blvd, Los Angeles, CA 90073 ([email protected]).
Author Contributions: Drs Mak and Shekelle had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Mak, Miake-Lye, Shekelle.
Acquisition, analysis, or interpretation of data: Mak, Allen, Begashaw, Beroes-Severin, De Vries, Lawson, Shekelle.
Drafting of the manuscript: Mak, Allen, Begashaw, Beroes-Severin, De Vries, Lawson, Shekelle.
Critical review of the manuscript for important intellectual content: Mak, Miake-Lye, Shekelle.
Statistical analysis: Allen.
Obtained funding: Shekelle.
Administrative, technical, or material support: Begashaw, Miake-Lye, Beroes-Severin, De Vries, Lawson.
Supervision: Mak, Shekelle.
Conflict of Interest Disclosures: None reported.
Funding/Support: Funding was provided by the Department of Veterans Affairs Health Services Research and Development.
Role of the Funder/Sponsor: The funders had no role in the collection, management, analysis, and interpretation of the data and preparation of the manuscript. The funders participated in the design and conduct of the study, the review and approval of the manuscript, and the decision to submit the manuscript for publication.
Data Sharing Statement: See Supplement 2 .
Beata Smela
a Assignity, Krakow, Poland
b Public Health Department, Aix-Marseille University, Marseille, France
Konrad Gawlik, Emilie Clay
c Clever-Access, Paris, France
Associated Data
The data supporting the findings of this study are available within the article and its supplementary materials.
Purpose: Nowadays, systematic literature reviews (SLRs) and meta-analyses are often placed at the top of the study hierarchy of evidence. The main objective of this paper is to evaluate the trends in SLRs of randomized controlled trials (RCTs) throughout the years.
Methods: The Medline database was searched using a highly focused search strategy. Each paper was coded according to a specific ICD-10 code; the number of RCTs included in each evaluated SLR was also retrieved. All SLRs analysing RCTs were included. Protocols, commentaries, and errata were excluded. No date or language restrictions were applied.
Results: A total of 7,465 titles and abstracts were analyzed, from which 6,892 were included for further analyses. There was a gradual increase in the number of annual published SLRs, with a significant increase in published articles during the last several years. Overall, the most frequently analyzed areas were diseases of the circulatory system ( n = 750) and endocrine, nutritional, and metabolic diseases ( n = 734). The majority of SLRs included between 11 and 50 RCTs each.
Conclusions: The recognition of SLRs’ usefulness is growing at an increasing speed, which is reflected by the growing number of published studies. The most frequently evaluated diseases are in alignment with leading causes of death and disability worldwide.
Presenting background information about a subject or documenting the growth of knowledge over time can be achieved with narrative reviews of the literature. However, narrative reviews tend to be subjective, as they rely on the author’s expertise on the discussed topic, and they offer a condensed presentation of a subject rather than an extensive one. Furthermore, they are frequently based on articles chosen selectively from the available material, which puts them at risk of systematic bias [1]. Typically, narrative reviews do not describe how the review process was carried out [2]. As a result, they usually do not provide a thorough foundation for theory development and testing [3]. In 1979, the British epidemiologist Archie Cochrane wrote: ‘It is surely a great criticism of our profession that we have not organised a critical summary, by speciality or subspecialty, updated periodically, of all relevant randomised controlled trials’ [4]. This is why researchers in the field of healthcare have been working on a programme of systematic reviews on the efficacy of therapies since the 1980s. To collect, assess, and promote research information, the Cochrane Collaboration was established in 1993. Since then, an extensive set of guidelines for conducting systematic reviews has been produced [5]. Other organizations have also joined this effort to convert the knowledge gained by health experts into practice, the main aim being to assist evidence-based medicine (EBM) practitioners in decision-making [6]. Nowadays, systematic literature reviews (SLRs) and meta-analyses are often placed at the top of the evidence hierarchy, usually depicted as a pyramid ordered by the design and risk of bias of the included studies [7]. In contrast to narrative reviews, systematic reviews address a specific research question [8]. This includes collecting all primary research applicable to the established review question and critically evaluating and synthesizing the data [9].
There are a few stages in conducting an SLR. Defining the review question, establishing hypotheses, and coming up with a review title are all part of the first stage. Titles should ideally be succinct and descriptive, e.g. intervention for the population with a given condition. One should always define inclusion and exclusion criteria a priori (according to PICO: P – population, I – intervention, C – comparison, O – outcomes), along with the study type (e.g. RCTs). The development of a search strategy is another key step in performing a good-quality SLR. Searching typically involves using several electronic databases (such as MEDLINE, EMBASE, or Cochrane CENTRAL), but it can also include consulting article reference lists, manually scanning important journals (hand-searching), or speaking directly with experts and scholars [10]. Once all abstracts are found, the next step is screening – the process of identifying articles for inclusion and removing duplicates [8]. Then, the relevant full-text articles are gathered and data from the selected studies are extracted. Data analysis should be carried out after quality assessment. Alternatively, some of these steps may be streamlined or omitted to produce evidence in a resource-efficient manner, in the form of a rapid review, which is less comprehensive than a traditional SLR [11]. The first phase of the analysis is a straightforward descriptive review of each study, usually referred to as qualitative analysis. If it is possible to combine results from different studies, a second phase – quantitative analysis, or meta-analysis – can be performed [12]. If used appropriately, meta-analysis will increase the accuracy of estimates of treatment outcomes, reducing the likelihood of false-positive or false-negative findings and possibly allowing for the earlier implementation of successful therapies [13].
The number of SLRs seems to have exploded over the years, yet, to the authors’ knowledge, no quantification of this phenomenon has been described. The main objective of this paper was to evaluate the volume trends in SLRs of RCTs throughout the years.
To analyse the overall increase in the number of SLRs over the years, a broad search of PubMed was performed in May 2023, using the following search string: ‘randomised controlled’ OR ‘randomised clinical’ OR ‘randomized controlled’ OR ‘randomized clinical’ OR RCT*. An appropriate filter was then applied to retrieve only studies with an SLR design. The total numbers of retrieved studies, stratified by publication year, were exported into an Excel file.
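A year-by-year count query like the one above can be reproduced programmatically against NCBI’s E-utilities ESearch endpoint. This is a hedged sketch, not the authors’ method (they exported counts from the PubMed web interface); the `systematic[sb]` subset filter stands in for the “appropriate filter” mentioned above, and the helper name is ours:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# The broad search string described in the text, combined with
# PubMed's systematic-review subset filter (illustrative choice).
QUERY = ('("randomised controlled" OR "randomised clinical" OR '
         '"randomized controlled" OR "randomized clinical" OR RCT*) '
         'AND systematic[sb]')

def esearch_count_url(term: str, year: int) -> str:
    """Build an ESearch URL that returns only the record count
    for `term` among articles published in `year`."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",           # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "rettype": "count",           # return the hit count only
    }
    return EUTILS + "?" + urlencode(params)

# One URL per publication year; fetching each would yield the
# annual counts plotted in Figure 1.
print(esearch_count_url(QUERY, 2022))
```

Fetching each URL (e.g. with `urllib.request`) and parsing the returned count would reproduce the stratified export described above.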
To run a detailed analysis of the trends in RCT SLRs over the years, a rapid review was conducted in the Medline database on a smaller, representative sample of references, and the results were compared with those from PubMed to check consistency between them and whether the same trend in the number of published SLRs would be observed. The Medline database was searched via Ovid in May 2023 using a highly focused search strategy: (systematic review* or systematic literature review*).ti AND randomi?ed controlled trial*.ti. The search results were then imported into the EndNote 20 (Clarivate) program and analysed using the EPPI-Reviewer Web software [14,15]. A single screening of titles and abstracts was performed. Additionally, based on the information provided in titles and abstracts, each paper was coded according to a specific ICD-10 code (depending on the analysed disease area or procedure, Table 1) or, if no ICD-10 code was applicable (for example, when the SLR analysed healthy subjects), to a ‘Treatments’ code, consisting of ‘Pain treatment’, ‘Anaesthesia’, ‘Supplements/diet’ and ‘Other treatments/interventions’ subcategories. When appropriate, two or more codes were selected. Data regarding the number of RCTs included in each analysed SLR were also retrieved and divided into six categories: 1–10, 11–50, 51–100, 101–200, >200, and not reported (in abstract/title, NR).
Disease area according to ICD-10 classification.
Code | Title | Code | Title |
---|---|---|---|
A00–B99 | Certain infectious and parasitic diseases | L00–L99 | Diseases of the skin and subcutaneous tissue |
C00–D48 | Neoplasms | M00–M99 | Diseases of the musculoskeletal system and connective tissue |
D50–D89 | Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism | N00–N99 | Diseases of the genitourinary system |
E00–E90 | Endocrine, nutritional and metabolic diseases | O00–O99 | Pregnancy, childbirth and the puerperium |
F00–F99 | Mental and behavioural disorders | P00–P96 | Certain conditions originating in the perinatal period |
G00–G99 | Diseases of the nervous system | Q00–Q99 | Congenital malformations, deformations and chromosomal abnormalities |
H00–H59 | Diseases of the eye and adnexa | R00–R99 | Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified |
H60–H95 | Diseases of the ear and mastoid process | S00–T98 | Injury, poisoning and certain other consequences of external causes |
I00–I99 | Diseases of the circulatory system | V01–Y98 | External causes of morbidity and mortality |
J00–J99 | Diseases of the respiratory system | Z00–Z99 | Factors influencing health status and contact with health services |
K00–K93 | Diseases of the digestive system | U00–U99 | Codes for special purposes |
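The six RCT-count categories used for coding each SLR can be expressed as a simple classification function. This is an illustrative sketch only (the coding was done in EPPI-Reviewer, not in code), and the function name is ours:

```python
def rct_count_category(n):
    """Assign an SLR to one of the six RCT-count bins used in the
    analysis: 1-10, 11-50, 51-100, 101-200, >200, or NR when the
    number of included RCTs was not reported in the title/abstract."""
    if n is None:
        return "NR"
    if 1 <= n <= 10:
        return "1-10"
    if n <= 50:
        return "11-50"
    if n <= 100:
        return "51-100"
    if n <= 200:
        return "101-200"
    return ">200"

# Hypothetical RCT counts extracted from a handful of abstracts
counts = [12, None, 7, 250, 88]
print([rct_count_category(n) for n in counts])
# → ['11-50', 'NR', '1-10', '>200', '51-100']
```

Tallying these labels over all 6,892 included SLRs would yield the distribution reported in the Results (e.g. 46.4% in the 11–50 bin).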
All SLRs analysing RCTs were included, without any restrictions on population, interventions, or outcomes. Protocols, commentaries, or errata were excluded. No restrictions on date or language were applied.
The PubMed search for SLRs of RCTs yielded 86,765 results (Figure 1). The first identified article was issued in 1990 and compared the effects of corticosteroid administration with no corticosteroid treatment before preterm delivery, based on data from 12 RCTs [16]. Another 10 articles were published in 1994, and since then new articles have been released annually. The number of newly published SLRs gradually increased each year: 100 articles were published in 1999, and 1,000 in 2005. In recent years, several thousand new RCT SLRs have been published annually; moreover, 37% of all identified records were published since 2020 (n = 32,174). We observe an exponential growth in published RCT SLRs. The largest number of new records was observed in 2022, with more than 10,000 publications released. As presented in Figure 2, the number of published RCTs was also growing; however, it reached a maximum in 2014 and has remained at a similar level since.
The number of RCT SLRs published over the years. Source: PubMed (search run in May 2023).
The number of RCTs published over the years. Source: PubMed (search run in May 2023).
The highly targeted search conducted in Medline yielded 7,534 records. After deduplication, 7,465 titles and abstracts were analysed, of which 6,892 were included for further analyses (Figure 3). The oldest retrieved publications date back to 1994. The gradual increase in the number of published RCT SLRs was consistent with that observed in the PubMed analysis: more than 200 articles were released in 2013 and more than 600 in 2019; the increase intensified during the last several years, from over 800 publications in 2020 to over 1,400 in 2022. Therefore, it was assumed that the identified records provided a representative sample for further analysis.
Distribution of RCT SLRs over the years (search run in May 2023).
The distribution of all identified RCT SLRs by evaluated disease area and number of included RCTs is presented in Figure 4. Overall, the most frequently analysed area in the identified articles was diseases of the circulatory system (I00–I99, n = 750; 10.9% of included articles), such as heart failure or stroke, closely followed by endocrine, nutritional, and metabolic diseases (E00–E90, n = 734; 10.7%), mainly diabetes. Additionally, 7.8% of SLRs focused on assessing the impact of various supplementations or diets (n = 535). Relatively few SLRs were identified for the following disease areas: congenital malformations, deformations and chromosomal abnormalities (Q00–Q99, n = 10), diseases of the ear and mastoid process (H60–H95, n = 21) and external causes of morbidity and mortality (V01–Y98, n = 26).
Number of RCT SLRs stratified by disease area and by the number of included RCTs.
The majority of SLRs summarised data from a moderate number of RCTs (between 11 and 50, n = 3,194; 46.4%); furthermore, one-third of the analysed reviews included a lower number of studies (between 1 and 10). Larger SLRs, including more than 50 trials, were fewer in number: 5.2% included between 51 and 100 trials (n = 361) and 1.9% between 101 and 200 (n = 131); only 87 of the 6,892 identified reviews analysed more than 200 trials. Furthermore, in 6.9% of articles, the number of incorporated trials was not reported in the abstract (n = 474). This proportion was consistent throughout the years and across disease areas.
Figure 5 depicts the distribution of studies according to disease area in three distinct periods: from 1994 to 2015 (A), from 2016 to 2019 (B) and from 2020 onward (C).
The number of RCT SLRs stratified by disease area, published in 3 distinct periods: from 1994 to 2015, from 2016 to 2019 and from 2020 to March of 2023.
A total of 1,624 identified RCT SLRs were published between 1994 and 2015. Diseases affecting the circulatory system (I00–I99; n = 176; 10.8%) and the musculoskeletal system and connective tissue (M00–M99, n = 138; 8.5%) were the main areas of focus. With 7.5% and 7.4% of studies, respectively, neoplasms (C00–D48, n = 121) and mental and behavioural disorders (F00–F99, n = 120) were also among the most commonly studied topics. Interestingly, the largest group of studies could not be assigned to any specific ICD-10 code (n = 223; 13.7%).
In the publishing period between 2016 and 2019, 1,880 RCT SLRs were identified. One significant change from the 1994–2015 period is that reviews on endocrine, nutritional, and metabolic diseases (E00-E90, n = 232; 12.3%) outnumbered the SLRs focused on circulatory system diseases (I00-I99, n = 212; 11.3%). Additionally, the number of studies analysing the impact of supplementations or diets increased twofold.
Half of all analysed RCT SLRs were published in recent years (2020 to March 2023; n = 3,416). Similar to the previous interval, the most frequently analysed disease area was endocrine, nutritional, and metabolic diseases (n = 394; 11.5%), closely followed by diseases of the circulatory system (n = 363; 10.6%) and of the musculoskeletal system and connective tissue (n = 305; 8.9%). The number of newly published studies analysing the impact of supplementations or diets again doubled in comparison with the 2016–2019 period. A threefold increase in the number of SLRs concerning nervous system diseases (G00–G99), such as Alzheimer’s disease, was also observed in recent years. The first appearance of codes for special purposes (U00–U99; n = 80), related to the COVID-19 pandemic, is also worth noting.
In recent years, evidence synthesis has become more crucial than individual studies. It helps with comparing similar studies, combining their findings, and making evidence more accessible, as well as with identifying the most cost-effective treatments and designing better future research. Back in 1995, the Cochrane Group released the Cochrane Database of Systematic Reviews (CDSR), which consisted of 50 reviews [17,18]. By comparison, the number of reviews in 2015 was above 6,000. In 1998, the CDSR was made accessible on the internet. Of the 2,500 reviews released each year, 20% are written by Cochrane [17]. Overall, in 2010, approximately 75 trials and 11 systematic literature reviews were published every day [19]. However, the number of issued SLRs did not exceed the number of narrative, non-systematic reviews, whose growth is even higher and which are published by many more journals [19]. Furthermore, fewer SLRs and trials are published than case reports [19]. Interestingly, data show that 95% of all articles and 98% of core clinical journals were produced by just 30 nations globally. By 2018, there was an increase in all publication types; however, the most significant increase was in publications of meta-analyses from China, which was leading the chart (n = 4,659); the United States of America was leading in systematic reviews (n = 3,654), clinical trials (n = 11,095) and RCTs (n = 7,953) [20].
With so many SLRs published nowadays, it is crucial to use trustworthy data. The quality of trials has been defined in the literature as ‘the likelihood of the trial design to generate unbiased results’ [21]. The quality of the individual included studies affects the quality of the entire SLR; therefore, a proper bias assessment is a crucial step and key component. This is especially relevant if the evidence of medical treatment effectiveness is inconclusive. Many tools help with performing a quality assessment of RCTs, such as the Jadad scale [8] or the Risk of Bias tool for randomized trials 2.0 (RoB 2.0), which is the suggested method for evaluating bias in studies that are part of Cochrane Reviews [22]. It comprises five domains: the randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported results [22]. Just as with RCTs, the quality of SLRs varies. A Measurement Tool To Assess Systematic Reviews (AMSTAR) was published in 2007 to enable health professionals and policymakers to quickly evaluate the quality of SLRs of interventional RCTs. However, owing to some criticisms, such as being focused mainly on RCTs and considering articles written in languages other than English as ‘grey literature’, the second, current version was developed in 2017. AMSTAR-2 additionally takes non-RCTs into account for assessment, with the goal of determining whether the most crucial information is reported in SLRs [23–25]. The CASP Systematic Review Checklist is another commonly used instrument, recommended by the World Health Organization and Cochrane as an approachable alternative for novice qualitative researchers [26]. It consists of ten questions, divided into three sections, that help to determine whether the results of the study are valid (Section A), what those results are (Section B), and whether the results will help locally (Section C) [27].
Lastly, while not a quality assessment instrument, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) could be used to improve the reporting of SLRs and meta-analyses; it is also a useful tool for critical appraisal of published SLRs [ 28 ].
The COVID-19 pandemic shook the world at the beginning of 2020. The need for more information about this disease was high, as there were many uncertainties. Already in April 2020, a quick search yielded 6,831 articles from a total of 1,430 journals about COVID-19 [29]. The most frequent study designs were review articles (n = 202) and SLRs (n = 43). An average of almost 59 articles were published every day [29]. The so-called ‘covidization’ emerged – until August 2021, in general and internal medicine publications, COVID-19 received up to 79.3% of citations and was mentioned in 98 of the top 100 most-cited articles [30]. That trend is also noticeable in the number of SLRs containing keywords for COVID-19 published in PubMed since 2020 (Figure 6, search run in May 2023 – Appendix). However, according to a systematic analysis of SLRs on COVID-19 published in 2021, the methodological quality of the reviews was poor: of 243 assessed with AMSTAR-2, 12.3% had moderate quality, 25.9% had low quality, and 61.7% had critically low quality [31]. These conclusions were confirmed by other authors. Abbott et al. conducted an analysis of early published COVID-19 SLRs and found that 88 of the 280 reviews assessed met SLR criteria, and only 3 of them had moderate or high quality according to AMSTAR-2. Fifty-two of those SLRs had been completed within 3 weeks, and the submission and publication process took 3 weeks in 50% of cases. The publications received high attention despite being of low quality [32]. This shows that studies reported as SLRs should not be assumed to be of high quality; each reader should analyse the methodology undertaken and consider its impact on the findings.
The number of SLRs on COVID-19 published since 2020. Source: PubMed (search run in May 2023).
The number of publications is increasing not only for RCT SLRs; similar tendencies may be observed for other study designs. According to the performed analysis of PubMed data, the overall trend in publishing SLRs of epidemiological studies is similar to that for SLRs of RCTs: the quantity of publications of both study designs grows consistently, although RCT SLRs are more numerous (in 2022, RCTs: n = 10,061; epidemiological: n = 8,947) (Figure 7, search run in May 2023 – Appendix). The growth of epidemiological studies may be a result of recent interest in real-world evidence (RWE) and its increasing role in health-care decisions. The increasing digitisation of health records, which facilitates data analysis, as well as an increasing focus on the importance of patient-reported outcomes, supports this trend [33].
The number of epidemiological SLRs published over the years. Source: PubMed (search run in May 2023).
According to the Global Health Estimates published by the World Health Organization (WHO), covering the period between 2000 and 2019, non-communicable diseases (e.g. chronic diseases such as heart disease, chronic respiratory disease, cancer or diabetes) made up 7 of the world’s top 10 causes of death [34,35]. Ischaemic heart disease was in the lead, accounting for 8.9 million deaths in 2019, while stroke was in second place, causing 11% of deaths [35]. This is consistent with the number of published RCT SLRs in the cardiovascular area and explains the interest that physicians and trialists across the world take in cardiovascular diseases. With such a high death rate caused by these illnesses, it is essential to study all possible treatments or interventions that can ease the suffering of many patients and, hopefully, extend their life expectancy and quality of life. In recent years, deaths from diabetes increased by 70% globally, representing the greatest percentage increase of all WHO regions [34,35], which was also noticeable in the SLR trends. Another change from previous years was the appearance of Alzheimer’s disease and other forms of dementia among the top 10 causes of death worldwide [34], also discernible in our analysis.
The diseases in question also affect quality of life and cause disability: heart disease, diabetes, stroke, lung cancer and chronic obstructive pulmonary disease were collectively responsible for nearly 100 million additional healthy life-years lost in 2019 compared with 2000 [ 34 ]. According to both the WHO [ 35 ] and the Global Burden of Disease study published by the Institute for Health Metrics and Evaluation (IHME) [ 36 ], neonatal conditions were among the leading causes of disability-adjusted life years (DALYs) in 2019; however, this was not captured in the current analysis, since there is no uniform ICD-10 code for this area ( Figure 8 ).
Figure 8. The burden of disease by cause, measured in DALYs. Source: Institute for Health Metrics and Evaluation (IHME): GBD results [ 36 ].
The analysis identified several disease areas with relatively few SLRs: congenital malformations, deformations and chromosomal abnormalities (Q00-Q99, n = 10), diseases of the ear and mastoid process (H60-H95, n = 21) and external causes of morbidity and mortality (V01-Y98, n = 26). For congenital and chromosomal diseases, it can be assumed that few RCTs are conducted because of low patient numbers, as these are often rare diseases; additionally, treatment is usually symptomatic, with limited options, such as gene therapies, for treating the underlying cause. Treatment for diseases occurring due to external causes also tends to be mostly symptomatic; therefore, very few RCTs focused specifically on the cause itself (such as injury or poisoning) are available. The scarcity of SLRs of RCTs on diseases of the ear and mastoid process accords with the 2021 WHO report on hearing, which underlines a significant worldwide gap in ear and hearing care services; for instance, there is an 83% gap between the need for and access to hearing aids. The authors suggest that the reasons include a lack of accurate information and stigmatising mindsets surrounding ear diseases [ 37 ].
The increasing number of SLRs reporting on food supplements and diets is also worth mentioning. Between 2010 and 2020, over 70,000 new articles on nutraceuticals became available in PubMed. The COVID-19 pandemic led to even higher interest in dietary supplements in early 2020: consumers seeking additional protection from disease drove a 44% increase in US sales during the first wave of the pandemic, relative to the same period in the previous year. In March 2020, supplement sales increased by 63% in the UK and by about 40-60% in France versus the same period in 2019 [ 38 ]. In some authors' opinion, these growth trends will not continue, and the market should normalise to pre-pandemic values over the following years [ 39 ]. According to other sources, the global supplement market is expected to grow, driven mainly by a focus on well-being and preventive healthcare, a shift from standard pharmaceuticals to supplements and diets, and the growing geriatric population [ 40 ].
The recognition of RCT SLRs' usefulness in providing synthesised, unbiased information has led to an increased volume of SLRs, and RCT SLR publications are growing at an exponential pace. The rapid increase in the number of published RCT SLRs in the last 3 years was partly driven by the emergence of the COVID-19 pandemic. While the SLR is considered the gold standard for addressing evidence unequivocally, in the case of COVID-19 it was a source of controversy and of divergent outcomes between studies. The diseases most frequently evaluated through RCT SLRs align with the leading causes of death and disability worldwide indicated in the reports published by the WHO and IHME [ 35 , 36 ]. The emergence of food supplements and diets illustrates the increasing interest in interventions that may be considered at the frontier of lifestyle and medicine. Although the SLR is recognised as the most rigorous way to perform a review, narrative reviews still outnumber SLRs. It is interesting to note that epidemiological SLRs are growing fast and are about to catch up with RCT SLRs in number. The development of real-world evidence to assess interventions, together with wider access to historical databases, may have played a role in the growth of epidemiological studies. SLRs will continue to multiply as the numbers of RCTs and epidemiological studies grow, making the need for unbiased summaries increasingly important for supporting EBM.
COVID-19 search strategy ("systematic reviews" filter applied), run on 15 May 2023 in PubMed:
(‘COVID-19’[Mesh] OR ‘SARS-CoV-2’[Mesh] OR ‘COVID-19 Vaccines’[Mesh] OR ‘COVID-19 Serological Testing’[Mesh] OR ‘COVID-19 Nucleic Acid Testing’[Mesh] OR ‘SARS-CoV-2 variants’ [Supplementary Concept] OR ‘COVID-19 drug treatment’ [Supplementary Concept] OR ‘COVID-19 serotherapy’ [Supplementary Concept] OR ‘2019-nCoV’ OR ‘2019nCoV’ OR ‘cov 2’ OR ‘COVID-19’ OR ‘sars coronavirus 2’ OR ‘sars cov 2’ OR ‘SARS-CoV-2’ OR ‘severe acute respiratory syndrome coronavirus 2’ OR ‘coronavirus 2’ OR ‘COVID 19’ OR ‘COVID-19’ OR ‘2019 ncov’ OR ‘2019nCoV’ OR ‘corona virus disease 2019’ OR ‘cov2’ OR ‘COVID-19’ OR ‘COVID19’ OR ‘nCov 2019’ OR ‘nCoV’ OR ‘new corona virus’ OR ‘new coronaviruses’ OR ‘novel corona virus’ OR ‘novel coronaviruses’ OR ‘sars coronavirus 2’ OR ‘SARS2’ OR ‘SARS-CoV-2’ OR ‘severe acute respiratory syndrome coronavirus 2’)
Epidemiological studies search strategy: ‘epidemiologic stud*’ OR ‘epidemiology’ OR ‘epidemiologic’ OR ‘epidemiological’ OR ‘epidemiol*’
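The per-year counts behind Figure 7 can be reproduced programmatically against PubMed. The sketch below is an illustration, not the authors' actual pipeline: it builds NCBI E-utilities `esearch` URLs that return only a hit count for a query restricted to one publication year. The `systematic review[pt]` publication-type tag is assumed here as an approximation of PubMed's "systematic reviews" web filter, and `build_yearly_count_url` is a hypothetical helper name.

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_yearly_count_url(query: str, year: int) -> str:
    """Build an E-utilities esearch URL that returns only the hit count
    (rettype=count) for `query`, limited to one publication year and to
    PubMed's 'Systematic Review' publication type (an approximation of
    the web interface's 'systematic reviews' filter)."""
    term = f"({query}) AND (systematic review[pt])"
    params = {
        "db": "pubmed",
        "term": term,
        "rettype": "count",   # return only the number of matching records
        "datetype": "pdat",   # restrict by publication date
        "mindate": str(year),
        "maxdate": str(year),
    }
    return f"{ESEARCH}?{urlencode(params)}"

# Example: one count URL per year for epidemiological SLRs;
# fetching each URL (and parsing the XML <Count> element) would
# yield the yearly series plotted in Figure 7.
for year in (2020, 2021, 2022):
    print(build_yearly_count_url("epidemiol*", year))
```

Fetching each URL and extracting the `Count` element then gives one data point per year; the same loop with an RCT query string would produce the comparison series.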
No potential conflict of interest was reported by the author(s).
Supplemental data for this article can be accessed online at https://doi.org/10.1080/20016689.2023.2244305
Analysing near-miss incidents in construction: a systematic literature review.
3. Research Methodology
4.1. A Statistical Analysis of Publications
4.2. Methods Used to Obtain Information about Near Misses
4.2.1. Traditional Methods
4.3.1. Quantitative and Qualitative Statistical Methods
4.3.2. Analysis Using Artificial Intelligence (AI)
4.3.3. Building Information Modelling
4.4. Key Aspects of Near-Miss Investigations in the Construction Industry
4.4.1. Occupational Risk Assessment
4.4.2. Causes of Hazards in Construction
4.4.3. Time Series of Near Misses
4.4.4. Material Factors of Construction Processes
4.5. A Comprehensive Overview of the Research Questions and References on Near Misses in the Construction Industry
5. Discussion
5.1. Interest of Researchers in Near Misses in Construction (Question 1)
5.2. Methods Used to Obtain Near-Miss Information (Question 2)
5.3. Methods Used to Analyse the Information and Data Sets (Question 3)
5.4. Key Aspects of Near-Miss Investigations in the Construction Industry (Question 4)
6. Conclusions
Institutional Review Board Statement; Informed Consent Statement; Data Availability Statement; Conflicts of Interest
Year | Source Title | DOI/ISBN/ISSN | Reference |
---|---|---|---|
1999 | Construction Management and Economics | 10.1080/014461999371691 | [ ] |
2002 | Structural Engineer | 14665123 | [ ] |
2009 | Building a Sustainable Future—Proceedings of the 2009 Construction Research Congress | 10.1061/41020(339)4 | [ ] |
2010 | Safety Science | 10.1016/j.ssci.2010.04.009 | [ ] |
2010 | Automation in Construction | 10.1016/j.autcon.2009.11.017 | [ ] |
2010 | Safety Science | 10.1016/j.ssci.2009.06.006 | [ ] |
2012 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0000518 | [ ] |
2013 | ISARC 2013—30th International Symposium on Automation and Robotics in Construction and Mining, Held in Conjunction with the 23rd World Mining Congress | 10.22260/isarc2013/0113 | [ ] |
2014 | Proceedings of the Institution of Civil Engineers: Civil Engineering | 10.1680/cien.14.00010 | [ ] |
2014 | Safety Science | 10.1016/j.ssci.2013.12.012 | [ ] |
2014 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0000795 | [ ] |
2014 | 31st International Symposium on Automation and Robotics in Construction and Mining, ISARC 2014—Proceedings | 10.22260/isarc2014/0115 | [ ] |
2014 | Construction Research Congress 2014: Construction in a Global Network—Proceedings of the 2014 Construction Research Congress | 10.1061/9780784413517.0181 | [ ] |
2014 | Construction Research Congress 2014: Construction in a Global Network—Proceedings of the 2014 Construction Research Congress | 10.1061/9780784413517.0235 | [ ] |
2014 | Construction Research Congress 2014: Construction in a Global Network—Proceedings of the 2014 Construction Research Congress | 10.1061/9780784413517.0096 | [ ] |
2015 | Automation in Construction | 10.1016/j.autcon.2015.09.003 | [ ] |
2015 | 32nd International Symposium on Automation and Robotics in Construction and Mining: Connected to the Future, Proceedings | 10.22260/isarc2015/0062 | [ ] |
2015 | ASSE Professional Development Conference and Exposition 2015 | - | [ ] |
2015 | Congress on Computing in Civil Engineering, Proceedings | 10.1061/9780784479247.019 | [ ] |
2016 | Automation in Construction | 10.1016/j.autcon.2016.03.008 | [ ] |
2016 | Automation in Construction | 10.1016/j.autcon.2016.04.007 | [ ] |
2016 | IEEE IAS Electrical Safety Workshop | 10.1109/ESW.2016.7499701 | [ ] |
2016 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0001100 | [ ] |
2016 | Safety Science | 10.1016/j.ssci.2015.11.025 | [ ] |
2016 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0001049 | [ ] |
2016 | IEEE Transactions on Industry Applications | 10.1109/TIA.2015.2461180 | [ ] |
2017 | Safety Science | 10.1016/j.ssci.2017.06.012 | [ ] |
2017 | ENR (Engineering News-Record) | 8919526 | [ ] |
2017 | 6th CSCE-CRC International Construction Specialty Conference 2017—Held as Part of the Canadian Society for Civil Engineering Annual Conference and General Meeting 2017 | 978-151087841-9 | [ ] |
2017 | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | 10.1007/978-3-319-72323-5_12 | [ ] |
2017 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0001209 | [ ] |
2017 | Safety Science | 10.1016/j.ssci.2016.08.027 | [ ] |
2017 | Safety Science | 10.1016/j.ssci.2016.08.022 | [ ] |
2018 | Safety Science | 10.1016/j.ssci.2018.04.004 | [ ] |
2018 | International Journal of Construction Management | 10.1080/15623599.2017.1382067 | [ ] |
2018 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0001420 | [ ] |
2018 | Proceedings of SPIE—The International Society for Optical Engineering | 10.1117/12.2296548 | [ ] |
2019 | Automation in Construction | 10.1016/j.autcon.2019.102854 | [ ] |
2019 | Physica A: Statistical Mechanics and its Applications | 10.1016/j.physa.2019.121495 | [ ] |
2019 | Sustainability (Switzerland) | 10.3390/su11051264 | [ ] |
2019 | Computing in Civil Engineering 2019: Data, Sensing, and Analytics—Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019 | 978-078448243-8 | [ ] |
2019 | Journal of Health, Safety and Environment | 18379362 | [ ] |
2019 | Computing in Civil Engineering 2019: Data, Sensing, and Analytics—Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019 | 978-078448243-8 | [ ] |
2019 | Computing in Civil Engineering 2019: Smart Cities, Sustainability, and Resilience—Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019 | 10.1061/9780784482445.026 | [ ] |
2019 | Journal of Construction Engineering and Management | 10.1061/(ASCE)CO.1943-7862.0001582 | [ ] |
2019 | Advances in Intelligent Systems and Computing | 10.1007/978-3-030-02053-8_107 | [ ] |
2020 | Accident Analysis and Prevention | 10.1016/j.aap.2020.105496 | [ ] |
2020 | Advanced Engineering Informatics | 10.1016/j.aei.2020.101062 | [ ] |
2020 | Advanced Engineering Informatics | 10.1016/j.aei.2020.101060 | [ ] |
2020 | ARCOM 2020—Association of Researchers in Construction Management, 36th Annual Conference 2020—Proceedings | 978-099554633-2 | [ ] |
2020 | International Journal of Building Pathology and Adaptation | 10.1108/IJBPA-03-2020-0018 | [ ] |
2020 | Communications in Computer and Information Science | 10.1007/978-3-030-42852-5_8 | [ ] |
2021 | Journal of Architectural Engineering | 10.1061/(ASCE)AE.1943-5568.0000501 | [ ] |
2021 | Safety Science | 10.1016/j.ssci.2021.105368 | [ ] |
2021 | ACM International Conference Proceeding Series | 10.1145/3482632.3487473 | [ ] |
2021 | Reliability Engineering and System Safety | 10.1016/j.ress.2021.107687 | [ ] |
2021 | Proceedings of the 37th Annual ARCOM Conference, ARCOM 2021 | - | [ ] |
2022 | Buildings | 10.3390/buildings12111855 | [ ] |
2022 | Safety Science | 10.1016/j.ssci.2022.105704 | [ ] |
2022 | Sensors | 10.3390/s22093482 | [ ] |
2022 | Proceedings of International Structural Engineering and Construction | 10.14455/ISEC.2022.9(2).CSA-03 | [ ] |
2022 | Journal of Information Technology in Construction | 10.36680/j.itcon.2022.045 | [ ] |
2022 | Forensic Engineering 2022: Elevating Forensic Engineering—Selected Papers from the 9th Congress on Forensic Engineering | 10.1061/9780784484555.005 | [ ] |
2022 | Computational Intelligence and Neuroscience | 10.1155/2022/4851615 | [ ] |
2022 | International Journal of Construction Management | 10.1080/15623599.2020.1839704 | [ ] |
2023 | Journal of Construction Engineering and Management | 10.1061/JCEMD4.COENG-13979 | [ ] |
2023 | Heliyon | 10.1016/j.heliyon.2023.e21607 | [ ] |
2023 | Accident Analysis and Prevention | 10.1016/j.aap.2023.107224 | [ ] |
2023 | Safety | 10.3390/safety9030047 | [ ] |
2023 | Engineering, Construction and Architectural Management | 10.1108/ECAM-09-2021-0797 | [ ] |
2023 | Advanced Engineering Informatics | 10.1016/j.aei.2023.101929 | [ ] |
2023 | Engineering, Construction and Architectural Management | 10.1108/ECAM-05-2023-0458 | [ ] |
2023 | Intelligent Automation and Soft Computing | 10.32604/iasc.2023.031359 | [ ] |
2023 | International Journal of Construction Management | 10.1080/15623599.2020.1847405 | [ ] |
2024 | Heliyon | 10.1016/j.heliyon.2024.e26410 | [ ] |
No. | Name of Institution/Organization | Definition |
---|---|---|
1 | Occupational Safety and Health Administration (OSHA) [ ] | “A near-miss is a potential hazard or incident in which no property was damaged and no personal injury was sustained, but where, given a slight shift in time or position, damage or injury easily could have occurred. Near misses also may be referred to as close calls, near accidents, or injury-free events.” |
2 | International Labour Organization (ILO) [ ] | “An event, not necessarily defined under national laws and regulations, that could have caused harm to persons at work or to the public, e.g., a brick that falls off scaffolding but does not hit anyone” |
3 | American National Safety Council (NSC) [ ] | “A Near Miss is an unplanned event that did not result in injury, illness, or damage—but had the potential to do so” |
4 | PN-ISO 45001:2018-06 [ ] | A near-miss incident is described as an event that does not result in injury or health issues. |
5 | PN-N-18001:2004 [ ] | A near-miss incident is an accident event without injury. |
6 | World Health Organization (WHO) [ ] | Near misses have been defined as a serious error that has the potential to cause harm but are not due to chance or interception. |
7 | International Atomic Energy Agency (IAEA) [ ] | Near misses have been defined as potentially significant events that could have consequences but did not due to the conditions at the time. |
No. | Journal | Number of Publications |
---|---|---|
1 | Safety Science | 10 |
2 | Journal of Construction Engineering and Management | 8 |
3 | Automation in Construction | 5 |
4 | Advanced Engineering Informatics | 3 |
5 | Construction Research Congress 2014: Construction in a Global Network—Proceedings of the 2014 Construction Research Congress | 3 |
6 | International Journal of Construction Management | 3 |
7 | Accident Analysis and Prevention | 2 |
8 | Computing in Civil Engineering 2019: Data, Sensing, and Analytics—Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019 | 2 |
9 | Engineering, Construction and Architectural Management | 2 |
10 | Heliyon | 2 |
Cluster Number | Colour | Basic Keywords |
---|---|---|
1 | blue | construction, construction sites, decision making, machine learning, near misses, neural networks, project management, safety, workers |
2 | green | building industry, construction industry, construction projects, construction work, human, near miss, near misses, occupational accident, occupational safety, safety, management, safety performance |
3 | red | accident prevention, construction equipment, construction, safety, construction workers, hazards, human resource management, leading indicators, machinery, occupational risks, risk management, safety engineering |
4 | yellow | accidents, risk assessment, civil engineering, near miss, surveys |
No. | Question | References |
---|---|---|
Q | Are near misses in the construction industry studied scientifically? | [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] |
Q | What methods have been used to obtain information on near misses and systems for recording incidents in construction companies? | [ , , , , , , , , , , , , , , , , , , , , ] |
Q | What methods have been used to analyse the information and figures that have been obtained? | [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] |
Q | What are the key aspects of near misses in the construction industry that have been of interest to the researchers? | [ , , , , , , , , , , , , ] |
Woźniak, Z.; Hoła, B. Analysing Near-Miss Incidents in Construction: A Systematic Literature Review. Appl. Sci. 2024 , 14 , 7260. https://doi.org/10.3390/app14167260