Charles Sturt University

Literature Review: Systematic literature reviews

  • Traditional or narrative literature reviews
  • Scoping Reviews
  • Systematic literature reviews
  • Annotated bibliography
  • Keeping up to date with literature
  • Finding a thesis
  • Evaluating sources and critical appraisal of literature
  • Managing and analysing your literature
  • Further reading and resources

Systematic reviews

Systematic and systematic-like reviews

Charles Sturt University Library has produced a comprehensive guide to systematic and systematic-like literature reviews. A comprehensive systematic literature review can often take a team of people up to a year to complete. This guide provides an overview of the steps required for systematic reviews:

  • Identify your research question
  • Develop your protocol
  • Conduct systematic searches (including the search strategy, text mining, choosing databases, documenting and reviewing)
  • Critical appraisal
  • Data extraction and synthesis
  • Writing and publishing
  • Systematic and systematic-like reviews Library Resource Guide

Systematic literature review

A systematic literature review (SLR) identifies, selects and critically appraises research in order to answer a clearly formulated question (Dewey & Drahota, 2016). The systematic review should follow a clearly defined protocol or plan in which the criteria are clearly stated before the review is conducted. It is a comprehensive, transparent search conducted across multiple databases and grey literature that can be replicated and reproduced by other researchers. It involves planning a well-thought-out search strategy with a specific focus or a defined question to answer. The review identifies the type of information searched, critiqued and reported within known timeframes. The search terms, search strategies (including database names, platforms and dates of search) and limits all need to be included in the review.

Pittaway (2008) outlines seven key principles behind systematic literature reviews, including:

  • Transparency
  • Integration
  • Accessibility

Systematic literature reviews originated in medicine and are linked to evidence-based practice. According to Grant and Booth (2009, p. 91), "the expansion in evidence-based practice has led to an increasing variety of review types". They compare and contrast 14 review types, listing the strengths and weaknesses of each.

Tranfield et al. (2003) discuss the origins of the evidence-based approach to undertaking a literature review and its application to other disciplines, including management and science.

References and additional resources

Dewey, A., & Drahota, A. (2016). Introduction to systematic reviews [online learning module]. Cochrane Training. https://training.cochrane.org/interactivelearning/module-1-introduction-conducting-systematic-reviews

Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews. London: SAGE.

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91-108.

Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x 

Pittaway, L. (2008). Systematic literature reviews. In R. Thorpe & R. Holt (Eds.), The SAGE dictionary of qualitative management research. SAGE Publications. doi:10.4135/9780857020109

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207-222.

Evidence based practice - an introduction: Literature reviews/systematic reviews

Evidence based practice - an introduction is a library guide produced by CSU Library for undergraduates. The information contained in the guide is also relevant for postgraduate study and will help you to understand the types of research and levels of evidence required to conduct evidence-based research.

  • Evidence based practice an introduction


How to Conduct a Literature Review: A Guide for Graduate Students

  • Let's Get Started!
  • Traditional or Narrative Reviews
  • Systematic Reviews
  • Typology of Reviews
  • Literature Review Resources
  • Developing a Search Strategy
  • What Literature to Search
  • Where to Search: Indexes and Databases
  • Finding articles: Libkey Nomad
  • Finding Dissertations and Theses
  • Extending Your Searching with Citation Chains
  • Forward Citation Chains - Cited Reference Searching
  • Keeping up with the Literature
  • Managing Your References
  • Need More Information?

Systematic Reviews Guides

  • Cornell University - Evidence Synthesis Service
  • Duke University - Systematic Reviews
  • Medical College of Wisconsin - Systematic Reviews
  • Purdue University - Systematic Reviews
  • Texas A&M - Systematic Reviews
  • University of South Florida - Systematic Reviews

Systematic Literature Reviews

A systematic literature review is a formal, structured research study that seeks to find, assess, and analyze studies on a specific question. Systematic reviews follow a defined search plan in which the criteria are clearly stated before the review is conducted. It is a comprehensive, transparent search that can be replicated and reproduced by other researchers.

Systematic reviews involve planning a well-thought-out search strategy with a specific focus or question. The review identifies the type of information searched, critiqued and reported within known timeframes. The search terms, search strategies (including database names, platforms and dates of search) and limits all need to be included in the review.

Pittaway (2008) outlines seven key principles behind systematic literature reviews, including:

  • Transparency
  • Integration
  • Accessibility

Systematic literature reviews originated in medicine and are linked to evidence-based practice, which combines critical thinking with the best available evidence. According to Grant and Booth (2009, p. 91), "the expansion in evidence-based practice has led to an increasing variety of review types". They compare and contrast 14 review types, listing the strengths and weaknesses of each.

Tranfield et al. (2003) discuss the origins of the evidence-based approach to undertaking a literature review and its application to other disciplines, including management and science.

References and additional resources

Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews (2nd ed.). London: SAGE.

Grant, M. J. & Booth, A. (2009) A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal 26(2), 91-108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Pittaway, L. (2008). Systematic literature reviews. In R. Thorpe & R. Holt (Eds.), The SAGE dictionary of qualitative management research. London: SAGE Publications.

Sambunjak, D., Cumpston, M., & Watts, C. (2017). Module 1: Introduction to conducting systematic reviews. In Cochrane Interactive Learning: Conducting an intervention review. Cochrane. Available from https://training.cochrane.org/interactivelearning/module-1-introduction-conducting-systematic-reviews

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207-222.

  • Short Introduction to Evidence Synthesis

Systematic Review vs. Literature Review


Kysh, L. (2013). Difference between a systematic review and a literature review [figshare]. Available at: http://dx.doi.org/10.6084/m9.figshare.766364

Google Slides-Guide to Systematic Reviews

  • Guide to Systematic Reviews: a Google Slides presentation from Texas A&M University about types of reviews and how to decide which one to do.


Systematic reviews of the literature: an introduction to current methods.

Brignardello-Petersen, R., Santesso, N., & Guyatt, G. H. (2024). Systematic reviews of the literature: an introduction to current methods. American Journal of Epidemiology, kwae232. https://doi.org/10.1093/aje/kwae232

Systematic reviews are a type of evidence synthesis in which authors develop explicit eligibility criteria, collect all the available studies that meet these criteria, and summarize results using reproducible methods that minimize biases and errors. Systematic reviews serve different purposes and use a different methodology than other types of evidence synthesis, which include narrative reviews, scoping reviews, and overviews of reviews. Systematic reviews can address questions regarding effects of interventions or exposures, diagnostic properties of tests, and prevalence or prognosis of diseases. All rigorous systematic reviews have common processes that include: 1) determining the question and eligibility criteria, including a priori specification of subgroup hypotheses, 2) searching for evidence and selecting studies, 3) abstracting data and assessing risk of bias of the included studies, 4) summarizing the data for each outcome of interest, whenever possible using meta-analyses, and 5) assessing the certainty of the evidence and drawing conclusions. There are several tools that can guide and facilitate the systematic review process, but methodological and content expertise are always necessary.


What is a Systematic Literature Review?

A systematic literature review (SLR) is an independent academic method that aims to identify and evaluate all relevant literature on a topic in order to derive conclusions about the question under consideration. "Systematic reviews are undertaken to clarify the state of existing research and the implications that should be drawn from this." (Feak & Swales, 2009, p. 3) An SLR can demonstrate the current state of research on a topic, while identifying gaps and areas requiring further research with regard to a given research question. A formal methodological approach is pursued in order to reduce distortions caused by an overly restrictive selection of the available literature and to increase the reliability of the literature selected (Tranfield, Denyer & Smart, 2003). A special aspect in this regard is the fact that a research objective is defined for the search itself and the criteria for determining what is to be included and excluded are defined prior to conducting the search. The search is mainly performed in electronic literature databases (such as Business Source Complete or Web of Science), but also includes manual searches (reviews of reference lists in relevant sources) and the identification of literature not yet published in order to obtain a comprehensive overview of a research topic.

An SLR protocol documents all the information gathered and the steps taken as part of an SLR in order to make the selection process transparent and reproducible. The PRISMA flow diagram supports you in making the selection process visible.

In an ideal scenario, experts from the respective research discipline, as well as experts working in the relevant field and in libraries, should be involved in setting the search terms. As a rule, the literature is selected by two or more reviewers working independently of one another. Both measures serve to increase the objectivity of the literature selection. An SLR must, then, be more than merely a summary of a topic (Briner & Denyer, 2012). As such, it also distinguishes itself from “ordinary” surveys of the available literature. The following table shows the differences between an SLR and an “ordinary” literature review.

  • Charts of BSWL workshop (pdf, 2.88 MB)
  • Listen to the interview (mp4, 12.35 MB)

Differences to "common" literature reviews

Characteristic | SLR | Common literature overview
Independent research method | yes | no
Explicit formulation of the search objectives | yes | no
Identification of all publications on a topic | yes | no
Defined criteria for inclusion and exclusion of publications | yes | no
Description of search procedure | yes | no
Literature selection and information extraction by several persons | yes | no
Transparent quality evaluation of publications | yes | no

What are the objectives of SLRs?

  • Avoidance of research redundancies despite a growing amount of publications
  • Identification of research areas, gaps and methods
  • Input for evidence-based management, which allows management decisions to be based on scientific methods and findings
  • Identification of links between different areas of research

Process steps of an SLR

An SLR has several process steps, which are defined differently in the literature (Fink 2014, p. 4; Guba 2008; Tranfield et al. 2003). We distinguish the following steps, which are adapted to the economics and management research area:

1. Defining research questions

Briner & Denyer (2009, p. 347ff.) have developed the CIMO scheme to establish clearly formulated and answerable research questions in the field of economic sciences:

C – Context:  Which individuals, relationships, institutional frameworks and systems are being investigated?

I – Intervention:  The effects of which event, action or activity are being investigated?

M – Mechanisms:  Which mechanisms can explain the relationship between interventions and results? Under what conditions do these mechanisms take effect?

O – Outcomes:  What are the effects of the intervention? How are the results measured? What are intended and unintended effects?

The objective of the systematic literature review is used to formulate research questions such as “How can a project team be led effectively?”. Since there are numerous interpretations and constructs for “effective”, “leadership” and “project team”, these terms must be specified more precisely.

With the aid of the scheme, the following concrete research questions can be derived with regard to this example:

Under what conditions (C) does leadership style (I) influence the performance of project teams (O)?

Which constructs have an effect upon the influence of leadership style (I) on a project team’s performance (O)?          

Research questions do not necessarily need to follow the CIMO scheme, but they should:

  • ... be formulated in a clear, focused and comprehensible manner and be answerable;
  • ... have been determined prior to carrying out the SLR;
  • ... consist of general and specific questions.

As early as this stage, the criteria for inclusion and exclusion are also defined. The selection of the criteria must be well grounded. They may include conceptual factors such as geographical or temporal restrictions and congruent definitions of constructs, as well as quality criteria (e.g. journal impact factor > x).

2. Selecting databases and other research sources

The selection of sources must be described and explained in detail. The aim is to find a balance between the relevance of the sources (content-related fit) and the scope of the sources.

In the field of economic sciences, there are a number of literature databases that can be searched as part of an SLR. Some examples in this regard are:

  • Business Source Complete
  • ProQuest One Business
  • EconBiz        

Our video " Selecting the right databases " explains how to find relevant databases for your topic.

Literature databases are an important source of research for SLRs, as they can minimize distortions caused by an individual literature selection (selection bias), while offering advantages for a systematic search due to their data structure. The aim is to find all database entries on a topic and thus keep the retrieval bias low (tutorial on retrieval bias). Besides articles from scientific journals, it is important to include working papers, conference proceedings, etc., to reduce the publication bias (tutorial on publication bias).

Our online self-study course " Searching economic databases " explains step 2 und 3.

3. Defining search terms

Once the literature databases and other research sources have been selected, search terms are defined. For this purpose, the research topic/questions is/are divided into blocks of terms of equal ranking. This approach is called the block-building method (Guba 2008, p. 63). The so-called document-term matrix, which lists topic blocks and search terms according to a scheme, is helpful in this regard. The aim is to identify as many different synonyms as possible for the partial terms. A precisely formulated research question facilitates the identification of relevant search terms. In addition, keywords from particularly relevant articles support the formulation of search terms.

A document-term matrix for the topic “The influence of management style on the performance of project teams” is shown in this example.

Identification of headwords and keywords

When setting search terms, a distinction must be made between subject headings and keywords, both of which are described below:

Keywords

  • appear in the title, abstract and/or text
  • sometimes specified by the author, but in most cases automatically generated
  • non-standardized
  • different spellings and forms (singular/plural) must be searched separately

Subject headings

  • describe the content
  • are generated by an editorial team
  • are listed in a standardized list (thesaurus)
  • may comprise various keywords
  • include different spellings
  • database-specific

Subject headings are a standardized list of words that are generated by the specialists in charge of some databases. This so-called index of subject headings (thesaurus) helps searchers find relevant articles, since the headwords indicate the content of a publication. By contrast, an ordinary keyword search does not necessarily result in a content-related fit, since the database also displays articles in which, for example, a word appears once in the abstract, even though the article’s content does not cover the topic.

Nevertheless, searches using both headwords and keywords should be conducted, since some articles may not yet have been assigned headwords, or errors may have occurred during the assignment of headwords. 

To add headwords to your search in the Business Source Complete database, please select the Thesaurus tab at the top. Here you can find headwords in a new search field and integrate them into your search query. In the search history, headwords are marked with the addition DE (descriptor).

The EconBiz database of the German National Library of Economics (ZBW – Leibniz Information Centre for Economics), which also contains German-language literature, has created its own index of subject headings with the STW Thesaurus for Economics . Headwords are integrated into the search by being used in the search query.

Since the indexes of subject headings divide terms into synonyms, generic terms and sub-aspects, they facilitate the creation of a document-term matrix. For this purpose it is advisable to specify in the document-term matrix the origin of the search terms (STW Thesaurus for Economics, Business Source Complete, etc.).

Searching in literature databases

Once the document-term matrix has been defined, the search in literature databases begins. It is recommended to enter each word of the document-term matrix individually into the database in order to obtain a good overview of the number of hits per word. All the words contained in a block of terms are then linked with the Boolean operator OR, forming the union of all the words in that block. The blocks are then linked with each other using the Boolean operator AND. In doing so, each block should be added individually in order to see to what degree the number of hits decreases (see the sketch below).
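The block-building logic just described can be illustrated with a short sketch: synonyms within a block are joined with OR, and the resulting blocks are joined with AND. The topic blocks, terms and generic query syntax below are assumptions made for illustration only; each database has its own field codes and operators.

```python
# Minimal sketch of the block-building method: OR within a block, AND between blocks.
# The topic blocks and terms are hypothetical; real databases use their own syntax.

def build_query(matrix):
    """Join the synonyms of each block with OR, then join the blocks with AND."""
    blocks = []
    for terms in matrix.values():
        quoted = ['"{}"'.format(t) if " " in t else t for t in terms]
        blocks.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(blocks)

document_term_matrix = {
    "leadership": ["leadership style", "management style"],
    "team":       ["project team", "project group"],
    "outcome":    ["performance", "effectiveness"],
}

print(build_query(document_term_matrix))
# ("leadership style" OR "management style") AND ("project team" OR "project group")
# AND (performance OR effectiveness)
```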

Since the search query must be set up separately for each database, tools such as LitSonar have been developed to enable a systematic search across different databases. LitSonar was created by Professor Dr. Ali Sunyaev (Institute of Applied Informatics and Formal Description Methods – AIFB) at the Karlsruhe Institute of Technology.

Advanced search

Certain database-specific commands can be used to refine a search, for example, by taking variable word endings into account (*) or specifying the distance between two words, etc. Our overview shows the most important search commands for our top databases.

Additional searches in sources other than literature databases

In addition to literature databases, other sources should also be searched. Fink (2014, p. 27) lists the following reasons for this:

  • the topic is new and not yet included in indexes of subject headings;
  • search terms are not used congruently in articles because uniform definitions do not exist;
  • some studies are still in the process of being published, or have been completed, but not published.

Therefore, further search strategies are manual search, bibliographic analysis, personal contacts and academic networks (Briner & Denyer, 2012, p. 349). Manual search means that you go through the source information of relevant articles and supplement your hit list accordingly. In addition, you should conduct a targeted search for so-called grey literature, that is, literature not distributed via the book trade, such as working papers from specialist areas and conference reports. By including different types of publications, the so-called publication bias (DBWM video "Understanding publication bias") – that is, distortions due to the exclusive use of articles from peer-reviewed journals – should be kept to a minimum.

The PRESS checklist can help you check the correctness of your search terms.

4. Merging hits from different databases

In principle, large amounts of data can easily be collected, structured and sorted with data processing programs such as Excel. Another option is to use reference management programs such as EndNote, Citavi or Zotero. The Saxon State and University Library Dresden (SLUB Dresden) provides an overview of current reference management programs. Software for qualitative data analysis such as NVivo is equally suited for data processing. A comprehensive overview of the features of different tools that support the SLR process can be found in Bandara et al. (2015).

Our online self-study course "Managing literature with Citavi" shows you how to use the reference management software Citavi.

When conducting an SLR, you should specify for each hit the database from which it originates and the date on which the query was made. In addition, you should always indicate how many hits you have identified in the various databases or, for example, by manual search.

Exporting data from literature databases

Exporting from literature databases is very easy. In Business Source Complete, you must first click on the “Share” button in the hit list, then “Email a link to download exported results” at the very bottom, and then select the appropriate format for the respective literature management program.

Exporting data from the literature database EconBiz is somewhat more complex. Here you must first create a marked list and then select each hit individually and add it to the marked list. Afterwards, articles on the list can be exported.

After merging all hits from the various databases, duplicate entries (duplicates) are deleted.
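As a rough illustration of this step, the sketch below merges hits exported from several databases and removes duplicates, matching records by DOI where available and otherwise by a normalised title. The record fields and the matching rule are assumptions made for the example; reference managers such as Citavi, EndNote or Zotero provide their own duplicate detection.

```python
def normalise(title):
    """Lower-case a title and strip punctuation/extra spaces for rough matching."""
    return " ".join("".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).split())

def deduplicate(records):
    """Keep the first occurrence of each record, matched by DOI or normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise(rec.get("title", ""))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical hits exported from two databases:
hits = [
    {"title": "Leadership style and project team performance", "doi": "10.1000/xyz1", "source": "Business Source Complete"},
    {"title": "Leadership Style and Project Team Performance.", "doi": "10.1000/xyz1", "source": "EconBiz"},
    {"title": "Team performance in projects", "doi": None, "source": "EconBiz"},
]
print(len(deduplicate(hits)))  # 2 unique records remain
```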

5. Applying inclusion and exclusion criteria

All publications are evaluated in the literature management program by applying the previously defined criteria for inclusion and exclusion. Only those sources that survive this selection process will subsequently be analyzed. The review process and inclusion criteria should be tested with a small sample, and adjustments made if necessary, before applying them to all articles. In the ideal case, even this selection would be carried out by more than one person, each working independently of the others. It needs to be made clear how discrepancies between reviewers are dealt with.

In the first step, the review of the criteria for inclusion and exclusion is primarily based on the title, abstract and subject headings in the databases, as well as on the keywords provided by the authors of a publication. In a second step, the whole article/source is read.

You can create tags for inclusion and exclusion in your literature management tool to keep an overview.

In addition to the common literature management tools, you can also use software tools that have been developed to support SLRs. The Central Library of the University of Zurich has published an overview and evaluation of different tools based on a survey among researchers (View SLR tools).

The selection process needs to be made transparent. The PRISMA flow diagram supports the visualization of the number of included/excluded studies, as sketched below.
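As a small illustration, the numbers needed for a PRISMA flow diagram can be tallied directly from a screening log. The log structure and decision labels below are hypothetical; the PRISMA 2020 template distinguishes further categories (for example, reports that could not be retrieved).

```python
from collections import Counter

# Hypothetical screening log: one entry per record after duplicate removal.
screening_log = [
    {"id": 1, "title_abstract": "include", "full_text": "include"},
    {"id": 2, "title_abstract": "include", "full_text": "exclude: wrong population"},
    {"id": 3, "title_abstract": "exclude", "full_text": None},
    {"id": 4, "title_abstract": "exclude", "full_text": None},
]

records_screened = len(screening_log)
excluded_title_abstract = sum(1 for r in screening_log if r["title_abstract"] == "exclude")
assessed_full_text = records_screened - excluded_title_abstract
included = sum(1 for r in screening_log if r["full_text"] == "include")
full_text_exclusions = Counter(
    r["full_text"] for r in screening_log
    if r["full_text"] and r["full_text"].startswith("exclude")
)

print(records_screened, excluded_title_abstract, assessed_full_text, included)  # 4 2 2 1
print(full_text_exclusions)  # reasons for full-text exclusion, as reported in PRISMA
```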

Forward and backward search

Should it become apparent that the number of sources found is relatively small, or if you wish to proceed with particular thoroughness, a forward and backward search based on the sources found is recommended (Webster & Watson 2002, p. xvi). A backward search means going through the bibliographies of the sources found. A forward search, by contrast, identifies articles that have cited the relevant publications. The Web of Science and Scopus databases can be used to perform such citation analyses.

6. Performing the review

As the next step, the remaining titles are analyzed for their content by reading them in full, often several times. Information is extracted according to defined criteria and the quality of the publications is evaluated. If the data extraction is carried out by more than one person, training helps to minimize differences between the reviewers.

Depending on the research questions, there are different methods for data extraction (content analysis, concept matrix, etc.). A so-called concept matrix can be used to structure the content of information (Webster & Watson 2002, p. xvii). The table below gives an example of a concept matrix according to Becker (2014).

Particularly in the field of economic sciences, the evaluation of a study’s quality cannot be performed according to a generally valid scheme, such as those existing in the field of medicine, for instance. Quality assessment therefore depends largely on the research questions.

Based on the findings of individual studies, a meta-level is then applied to try to understand what similarities and differences exist between the publications, what research gaps exist, etc. This may also result in the development of a theoretical model or reference framework.

Example concept matrix (Becker 2013) on the topic Business Process Management

Article | Pattern | Configuration | Similarities
Thom (2008) | x |  | 
Yang (2009) | x |  | x
Rosa (2009) |  | x | x

7. Synthesizing results

Once the review has been conducted, the results must be compiled and, on the basis of these, conclusions derived with regard to the research question (Fink 2014, p. 199ff.). This includes, for example, the following aspects:

  • historical development of topics (histogram, time series: when, and how frequently, did publications on the research topic appear?);
  • overview of journals, authors or specialist disciplines dealing with the topic;
  • comparison of applied statistical methods;
  • topics covered by research;
  • identifying research gaps;
  • developing a reference framework;
  • developing constructs;
  • performing a meta-analysis: comparison of the correlations of the results of different empirical studies (see, for example, Fink 2014, p. 203 on conducting meta-analyses); a brief computational sketch follows below.
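For the meta-analysis mentioned in the last point, one standard approach for correlations is fixed-effect, inverse-variance pooling on the Fisher z scale. The study data below are made up, and the sketch ignores heterogeneity checks and random-effects models, so it is an illustration of the calculation rather than a recommended workflow.

```python
import math

# Hypothetical studies: (correlation r, sample size n)
studies = [(0.30, 120), (0.45, 80), (0.25, 200)]

# Fisher z-transform each correlation; the variance of z is 1 / (n - 3).
weighted_sum, total_weight = 0.0, 0.0
for r, n in studies:
    z = math.atanh(r)          # Fisher z
    weight = n - 3             # inverse-variance weight
    weighted_sum += weight * z
    total_weight += weight

pooled_z = weighted_sum / total_weight
pooled_r = math.tanh(pooled_z)             # back-transform to a correlation
se = math.sqrt(1 / total_weight)           # standard error of the pooled z
ci = (math.tanh(pooled_z - 1.96 * se), math.tanh(pooled_z + 1.96 * se))

print(round(pooled_r, 3), [round(x, 3) for x in ci])  # pooled correlation and 95% CI
```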

Publications about the method

Bandara, W., Furtmueller, E., Miskon, S., Gorbacheva, E., & Beekhuyzen, J. (2015). Achieving Rigor in Literature Reviews: Insights from Qualitative Data Analysis and Tool-Support.  Communications of the Association for Information Systems . 34(8), 154-204.

Booth, A., Papaioannou, D., and Sutton, A. (2012)  Systematic approaches to a successful literature review.  London: Sage.

Briner, R. B., & Denyer, D. (2012). Systematic review and evidence synthesis as a practice and scholarship tool. In D. M. Rousseau (Ed.), The Oxford Handbook of Evidence-Based Management (pp. 112-129). Oxford: Oxford University Press.

Durach, C. F., Wieland, A., & Machuca, J. A. D. (2015). Antecedents and dimensions of supply chain robustness: a systematic literature review. International Journal of Physical Distribution & Logistics Management, 45(1/2), 118-137. doi: https://doi.org/10.1108/IJPDLM-05-2013-0133

Feak, C. B., & Swales, J. M. (2009). Telling a Research Story: Writing a Literature Review.  English in Today's Research World 2.  Ann Arbor: University of Michigan Press. doi:  10.3998/mpub.309338

Fink, A. (2014). Conducting Research Literature Reviews: From the Internet to Paper (4th ed.). Los Angeles, London, New Delhi, Singapore, Washington DC: Sage Publications.

Fisch, C., & Block, J. (2018). Six tips for your (systematic) literature review in business and management research. Management Review Quarterly, 68, 103-106. doi.org/10.1007/s11301-018-0142-x

Guba, B. (2008). Systematische Literaturrecherche [Systematic literature searching]. Wiener Medizinische Wochenschrift, 158(1-2), 62-69. doi.org/10.1007/s10354-007-0500-0

Hart, C. Doing a literature review: releasing the social science research imagination. London: Sage.

Jesson, J. K., Matheson, L., & Lacey, F. (2011). Doing your Literature Review: Traditional and Systematic Techniques. Los Angeles, London, New Delhi, Singapore, Washington DC: Sage Publications.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. doi: 10.1136/bmj.n71.

Petticrew, M., & Roberts, H. (2006). Systematic Reviews in the Social Sciences: A Practical Guide. Oxford: Blackwell.

Ridley, D. (2012). The literature review: A step-by-step guide (2nd ed.). London: Sage.

Chang, W. and Taylor, S.A. (2016), The Effectiveness of Customer Participation in New Product Development: A Meta-Analysis,  Journal of Marketing , American Marketing Association, Los Angeles, CA, Vol. 80 No. 1, pp. 47–64.

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207-222. doi: https://doi.org/10.1111/1467-8551.00375

Webster, J., & Watson, R. T. (2002). Analyzing the Past to Prepare for the Future: Writing a Literature Review.  Management Information Systems Quarterly , 26(2), xiii-xxiii.  http://www.jstor.org/stable/4132319

Durach, C. F., Wieland, A., & Machuca, J. A. D. (2015). Antecedents and dimensions of supply chain robustness: a systematic literature review. International Journal of Physical Distribution & Logistics Management, 45(1/2), 118-137.

What is particularly good about this example is that the search terms were defined by a number of experts and the review was conducted by three researchers working independently of one another. Furthermore, the search terms used are documented in detail and the literature selection procedure is very well described.

On the downside, the restriction to English-language literature brings the language bias into play, even though the authors consider it to be insignificant for the subject area.

Bos-Nehles, A., Renkema, M. & Janssen, M. (2017). HRM and innovative work behaviour: a systematic literature review. Personnel Review, 46(7), pp. 1228-1253

  • Only very specific keywords used
  • No precise information on how the review process was carried out (who reviewed articles?)
  • Only journals with impact factor (publication bias)

Jia, F., Orzes, G., Sartor, M. & Nassimbeni, G. (2017). Global sourcing strategy and structure: towards a conceptual framework. International Journal of Operations & Production Management, 37(7), 840-864

  • Research questions are explicitly presented
  • Search string very detailed
  • Exact description of the review process
  • Two reviewers conducted the review independently of each other



Systematic reviews to support evidence-based medicine, 2nd edition

This highly-acclaimed book continues to serve as an essential text for all medical, surgical and health professionals who need an easily accessible reference to systematically reviewing the literature. The authors are veterans of more than 150 systematic reviews and have helped form policy and practice. They have ensured that this text, enhanced by case studies, continues to be the first reference for all health professionals undertaking literature reviews. The book provides the key steps to not only reviewing the literature but also assessing the credibility of recommendations in published reviews and practice guidelines.


Systematic Reviews in Educational Research: Methodology, Perspectives and Application

  • Open Access
  • First Online: 22 November 2019


  • Mark Newman
  • David Gough


This chapter explores the processes of reviewing literature as a research method. The logic of the family of research approaches called systematic review is analysed and the variation in techniques used in the different approaches explored using examples from existing reviews. The key distinctions between aggregative and configurative approaches are illustrated and the chapter signposts further reading on key issues in the systematic review process.


1 What Are Systematic Reviews?

A literature review is a scholarly paper which provides an overview of current knowledge about a topic. It will typically include substantive findings, as well as theoretical and methodological contributions to a particular topic (Hart 2018, p. xiii). Traditionally in education ‘reviewing the literature’ and ‘doing research’ have been viewed as distinct activities. Consider the standard format of research proposals, which usually have some kind of ‘review’ of existing knowledge presented distinctly from the methods of the proposed new primary research. However, both reviews and research are undertaken in order to find things out. Reviews to find out what is already known from pre-existing research about a phenomenon, subject or topic; new primary research to provide answers to questions about which existing research does not provide clear and/or complete answers.

When we use the term research in an academic sense it is widely accepted that we mean a process of asking questions and generating knowledge to answer these questions using rigorous accountable methods. As we have noted, reviews also share the same purposes of generating knowledge but historically we have not paid as much attention to the methods used for reviewing existing literature as we have to the methods used for primary research. Literature reviews can be used for making claims about what we know and do not know about a phenomenon and also about what new research we need to undertake to address questions that are unanswered. Therefore, it seems reasonable to conclude that ‘how’ we conduct a review of research is important.

The increased focus on the use of research evidence to inform policy and practice decision-making in Evidence Informed Education (Hargreaves 1996; Nelson and Campbell 2017) has increased the attention given to contextual and methodological limitations of research evidence provided by single studies. Reviews of research may help address these concerns when carried out in a systematic, rigorous and transparent manner. This again emphasizes the importance of ‘how’ reviews are completed.

The logic of systematic reviews is that reviews are a form of research and thus can be improved by using appropriate and explicit methods. As the methods of systematic review have been applied to different types of research questions, there has been an increasing plurality of types of systematic review. Thus, the term ‘systematic review’ is used in this chapter to refer to a family of research approaches that are a form of secondary level analysis (secondary research) that brings together the findings of primary research to answer a research question. Systematic reviews can therefore be defined as “a review of existing research using explicit, accountable rigorous research methods” (Gough et al. 2017 , p. 4).

2 Variation in Review Methods

Reviews can address a diverse range of research questions. Consequently, as with primary research, there are many different approaches and methods that can be applied. The choices should be dictated by the review questions. These are shaped by reviewers’ assumptions about the meaning of a particular research question, the approach and methods that are best used to investigate it. Attempts to classify review approaches and methods risk making hard distinctions between methods and thereby to distract from the common defining logics that these approaches often share. A useful broad distinction is between reviews that follow a broadly configurative synthesis logic and reviews that follow a broadly aggregative synthesis logic (Sandelowski et al. 2012 ). However, it is important to keep in mind that most reviews have elements of both (Gough et al. 2012 ).

Reviews that follow a broadly configurative synthesis logic approach usually investigate research questions about meaning and interpretation to explore and develop theory. They tend to use exploratory and iterative review methods that emerge throughout the process of the review. Studies included in the review are likely to have investigated the phenomena of interest using methods such as interviews and observations, with data in the form of text. Reviewers are usually interested in purposive variety in the identification and selection of studies. Study quality is typically considered in terms of authenticity. Synthesis consists of the deliberative configuring of data by reviewers into patterns to create a richer conceptual understanding of a phenomenon. For example, meta ethnography (Noblit and Hare 1988 ) uses ethnographic data analysis methods to explore and integrate the findings of previous ethnographies in order to create higher-level conceptual explanations of phenomena. There are many other review approaches that follow a broadly configurative logic (for an overview see Barnett-Page and Thomas 2009 ); reflecting the variety of methods used in primary research in this tradition.

Reviews that follow a broadly aggregative synthesis logic usually investigate research questions about impacts and effects. For example, systematic reviews that seek to measure the impact of an educational intervention test the hypothesis that an intervention has the impact that has been predicted. Reviews following an aggregative synthesis logic do not tend to develop theory directly; though they can contribute by testing, exploring and refining theory. Reviews following an aggregative synthesis logic tend to specify their methods in advance (a priori) and then apply them without any deviation from a protocol. Reviewers are usually concerned to identify the comprehensive set of studies that address the research question. Studies included in the review will usually seek to determine whether there is a quantitative difference in outcome between groups receiving and not receiving an intervention. Study quality assessment in reviews following an aggregative synthesis logic focusses on the minimisation of bias and thus selection pays particular attention to homogeneity between studies. Synthesis aggregates, i.e. counts and adds together, the outcomes from individual studies using, for example, statistical meta-analysis to provide a pooled summary of effect.

3 The Systematic Review Process

Different types of systematic review are discussed in more detail later in this chapter. The majority of systematic review types share a common set of processes. These processes can be divided into distinct but interconnected stages as illustrated in Fig.  1 . Systematic reviews need to specify a research question and the methods that will be used to investigate the question. This is often written as a ‘protocol’ prior to undertaking the review. Writing a protocol or plan of the methods at the beginning of a review can be a very useful activity. It helps the review team to gain a shared understanding of the scope of the review and the methods that they will use to answer the review’s questions. Different types of systematic reviews will have more or less developed protocols. For example, for systematic reviews investigating research questions about the impact of educational interventions it is argued that a detailed protocol should be fully specified prior to the commencement of the review to reduce the possibility of reviewer bias (Torgerson 2003 , p. 26). For other types of systematic review, in which the research question is more exploratory, the protocol may be more flexible and/or developmental in nature.

Fig. 1 The systematic review process: developing research questions, designing a conceptual framework, constructing selection criteria, developing a search strategy, selecting studies using the selection criteria, coding studies, assessing the quality of studies, synthesising the results of individual studies to answer the review research questions, and reporting findings.

3.1 Systematic Review Questions and the Conceptual Framework

The review question gives each review its particular structure and drives key decisions about what types of studies to include; where to look for them; how to assess their quality; and how to combine their findings. Although a research question may appear to be simple, it will include many assumptions. Whether implicit or explicit, these assumptions will include: epistemological frameworks about knowledge and how we obtain it, theoretical frameworks, whether tentative or firm, about the phenomenon that is the focus of study.

Taken together, these produce a conceptual framework that shapes the research questions, choices about appropriate systematic review approach and methods. The conceptual framework may be viewed as a working hypothesis that can be developed, refined or confirmed during the course of the research. Its purpose is to explain the key issues to be studied, the constructs or variables, and the presumed relationships between them. The framework is a research tool intended to assist a researcher to develop awareness and understanding of the phenomena under scrutiny and to communicate this (Smyth 2004 ).

A review to investigate the impact of an educational intervention will have a conceptual framework that includes a hypothesis about a causal link between who the review is about (the people), what the review is about (an intervention and what it is being compared with), and the possible consequences of the intervention on the educational outcomes of these people. Such a review would follow a broadly aggregative synthesis logic. This is the shape of reviews of educational interventions carried out for the What Works Clearinghouse in the USA and the Education Endowment Foundation in England.

A review to investigate meaning or understanding of a phenomenon for the purpose of building or further developing theory will still have some prior assumptions. Thus, an initial conceptual framework will contain theoretical ideas about how the phenomena of interest can be understood and some ideas justifying why a particular population and/or context is of specific interest or relevance. Such a review is likely to follow a broadly configurative logic.

3.2 Selection Criteria

Reviewers have to make decisions about which research studies to include in their review. In order to do this systematically and transparently they develop rules about which studies can be selected into the review. Selection criteria (sometimes referred to as inclusion or exclusion criteria) create restrictions on the review. All reviews, whether systematic or not, limit in some way the studies that are considered by the review. Systematic reviews simply make these restrictions transparent and therefore consistent across studies. These selection criteria are shaped by the review question and conceptual framework. For example, a review question about the impact of homework on educational attainment would have selection criteria specifying who had to do the homework; the characteristics of the homework and the outcomes that needed to be measured. Other commonly used selection criteria include study participant characteristics; the country where the study has taken place and the language in which the study is reported. The type of research method(s) may also be used as a selection criterion but this can be controversial given the lack of consensus in education research (Newman 2008 ), and the inconsistent terminology used to describe education research methods.

3.3 Developing the Search Strategy

The search strategy is the plan for how relevant research studies will be identified. The review question and conceptual framework shape the selection criteria. The selection criteria specify the studies to be included in a review and thus are a key driver of the search strategy. A key consideration will be whether the search aims to be exhaustive, i.e. aims to try to find all the primary research that has addressed the review question. Where reviews address questions about the effectiveness or impact of educational interventions, the issue of publication bias is a concern. Publication bias is the phenomenon whereby smaller studies and/or studies with negative findings are less likely to be published and/or are harder to find. We may therefore inadvertently overestimate the positive effects of an educational intervention because we do not find studies with negative or smaller effects (Chow and Eckholm 2018). Where the review question is not of this type, a more specific or purposive search strategy, which may or may not evolve as the review progresses, may be appropriate. This is similar to sampling approaches in primary research. In primary research studies using aggregative approaches, such as quasi-experiments, analysis is based on the study of complete or representative samples. In primary research studies using configurative approaches, such as ethnography, analysis is based on examining a range of instances of the phenomena in similar or different contexts.

The search strategy will detail the sources to be searched and the way in which the sources will be searched. A list of search source types is given in Box 1 below. An exhaustive search strategy would usually include all of these sources using multiple bibliographic databases. Bibliographic databases usually index academic journals and thus are an important potential source. However, in most fields, including education, relevant research is published in a range of journals which may be indexed in different bibliographic databases and thus it may be important to search multiple bibliographic databases. Furthermore, some research is published in books and an increasing amount of research is not published in academic journals or at least may not be published there first. Thus, it is important to also consider how you will find relevant research in other sources including ‘unpublished’ or ‘grey’ literature. The Internet is a valuable resource for this purpose and should be included as a source in any search strategy.

Box 1: Search Sources

The World Wide Web/Internet

Google, Specialist Websites, Google Scholar, Microsoft Academic

Bibliographic Databases

Subject specific e.g. Education—ERIC: Education Resources Information Centre

Generic e.g. ASSIA: Applied Social Sciences Index and Abstracts

Handsearching of specialist journals or books

Contacts with Experts

Citation Checking

New, federated search engines are being developed, which search multiple sources at the same time, eliminating duplicates automatically (Tsafnat et al. 2013 ). Technologies, including text mining, are being used to help develop search strategies, by suggesting topics and terms on which to search—terms that reviewers may not have thought of using. Searching is also being aided by technology through the increased use (and automation) of ‘citation chasing’, where papers that cite, or are cited by, a relevant study are checked in case they too are relevant.

A search strategy will identify the search terms that will be used to search the bibliographic databases. Bibliographic databases usually index records according to their topic using ‘keywords’ or ‘controlled terms’ (categories used by the database to classify papers). A comprehensive search strategy usually combines free-text searching, using keywords determined by the reviewers, with searching on controlled terms. An example of a bibliographic database search is given in Box 2. This search was used in a review that aimed to find studies that investigated the impact of Youth Work on positive youth outcomes (Dickson et al. 2013). The search is built using terms for the population of interest (Youth), the intervention of interest (Youth Work) and the outcomes of interest (Positive Development). It used both keywords and controlled terms, ‘wildcards’ (the * sign in this database) and the Boolean operators ‘OR’ and ‘AND’ to combine terms. This example illustrates the potential complexity of bibliographic database search strings, which will usually require a process of iterative development to finalise.

Box 2: Search string example To identify studies that address the question What is the empirical research evidence on the impact of youth work on the lives of children and young people aged 10-24 years?: CSA ERIC Database

((TI = (adolescen* or (“young man*”) or (“young men”)) or TI = ((“young woman*”) or (“young women”) or (Young adult*”)) or TI = ((“young person*”) or (“young people*”) or teen*) or AB = (adolescen* or (“young man*”) or (“young men”)) or AB = ((“young woman*”) or (“young women”) or (Young adult*”)) or AB = ((“young person*”) or (“young people*”) or teen*)) or (DE = (“youth” or “adolescents” or “early adolescents” or “late adolescents” or “preadolescents”))) and(((TI = ((“positive youth development “) or (“youth development”) or (“youth program*”)) or TI = ((“youth club*”) or (“youth work”) or (“youth opportunit*”)) or TI = ((“extended school*”) or (“civic engagement”) or (“positive peer culture”)) or TI = ((“informal learning”) or multicomponent or (“multi-component “)) or TI = ((“multi component”) or multidimensional or (“multi-dimensional “)) or TI = ((“multi dimensional”) or empower* or asset*) or TI = (thriv* or (“positive development”) or resilienc*) or TI = ((“positive activity”) or (“positive activities”) or experiential) or TI = ((“community based”) or “community-based”)) or(AB = ((“positive youth development “) or (“youth development”) or (“youth program*”)) or AB = ((“youth club*”) or (“youth work”) or (“youth opportunit*”)) or AB = ((“extended school*”) or (“civic engagement”) or (“positive peer culture”)) or AB = ((“informal learning”) or multicomponent or (“multi-component “)) or AB = ((“multi component”) or multidimensional or (“multi-dimensional “)) or AB = ((“multi dimensional”) or empower* or asset*) or AB = (thriv* or (“positive development”) or resilienc*) or AB = ((“positive activity”) or (“positive activities”) or experiential) or AB = ((“community based”) or “community-based”))) or (DE=”community education”))

Detailed guidance for finding effectiveness studies is available from the Campbell Collaboration (Kugley et al. 2015 ). Guidance for finding a broader range of studies has been produced by the EPPI-Centre (Brunton et al. 2017a ).

3.4 The Study Selection Process

Studies identified by the search are subject to a process of checking (sometimes referred to as screening) to ensure they meet the selection criteria. This is usually done in two stages whereby titles and abstracts are checked first to determine whether the study is likely to be relevant and then a full copy of the paper is acquired to complete the screening exercise. The process of finding studies is not efficient. Searching bibliographic databases, for example, leads to many irrelevant studies being found which then have to be checked manually one by one to find the few relevant studies. There is increasing use of specialised software to support and in some cases, automate the selection process. Text mining, for example, can assist in selecting studies for a review (Brunton et al. 2017b ). A typical text mining or machine learning process might involve humans undertaking some screening, the results of which are used to train the computer software to learn the difference between included and excluded studies and thus be able to indicate which of the remaining studies are more likely to be relevant. Such automated support may result in some errors in selection, but this may be less than the human error in manual selection (O’Mara-Eves et al. 2015 ).
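A minimal sketch of this kind of machine-learning support is shown below, assuming scikit-learn and a handful of already screened titles: a model trained on human include/exclude decisions scores the unscreened records so that the most likely includes are screened first. It illustrates the general priority-screening idea only; it is not the algorithm used by EPPI Reviewer or any specific tool, and the example texts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical records already screened by humans (1 = include, 0 = exclude).
screened_texts = [
    "youth work programme improves positive development outcomes",
    "randomised trial of a youth club intervention for adolescents",
    "manufacturing supply chain robustness antecedents",
    "tax policy and firm investment decisions",
]
screened_labels = [1, 1, 0, 0]

# Titles/abstracts not yet screened, to be ranked by predicted relevance.
unscreened_texts = [
    "civic engagement programme for young people aged 10-24",
    "monetary policy transmission in emerging markets",
]

vectoriser = TfidfVectorizer()
X_train = vectoriser.fit_transform(screened_texts)
model = LogisticRegression().fit(X_train, screened_labels)

scores = model.predict_proba(vectoriser.transform(unscreened_texts))[:, 1]
ranked = sorted(zip(unscreened_texts, scores), key=lambda pair: pair[1], reverse=True)
for text, score in ranked:
    print(f"{score:.2f}  {text}")  # screen the most likely includes first
```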

3.5 Coding Studies

Once relevant studies have been selected, reviewers need to systematically identify and record the information from the study that will be used to answer the review question. This information includes the characteristics of the studies, including details of the participants and contexts. The coding describes: (i) details of the studies to enable mapping of what research has been undertaken; (ii) how the research was undertaken to allow assessment of the quality and relevance of the studies in addressing the review question; (iii) the results of each study so that these can be synthesised to answer the review question.

The information is usually coded into a data collection system using some kind of technology that facilitates information storage and analysis (Brunton et al. 2017b), such as the EPPI-Centre’s bespoke systematic review software EPPI Reviewer. Decisions about which information to record will be made by the review team based on the review question and conceptual framework. For example, a systematic review about the relationship between school size and student outcomes collected data from the primary studies about each school’s funding, students, teachers and school organisational structure, as well as about the research methods used in the study (Newman et al. 2006). The information coded about the methods used in the research will vary depending on the type of research included and the approach that will be used to assess the quality and relevance of the studies (see the next section for further discussion of this point).
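As a rough illustration of the kind of coding record such a system stores, the sketch below defines a minimal extraction form in Python. The field names loosely echo the school-size example above and are purely illustrative; they are not the data model of EPPI Reviewer or any published coding framework.

# Sketch: a minimal data extraction record; the fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    study_id: str
    design: str                        # e.g. "cohort", "RCT", "cross-sectional"
    country: str
    participants: str                  # who and where, for mapping purposes
    school_characteristics: str        # e.g. funding and organisational structure
    outcome_measures: list = field(default_factory=list)
    results: dict = field(default_factory=dict)   # only the pre-specified result data

record = StudyRecord(
    study_id="hypothetical_study_01",
    design="cohort",
    country="UK",
    participants="Year 9 students in 120 secondary schools",
    school_characteristics="state funded; 11-16 and 11-18 schools",
    outcome_measures=["standardised mathematics test"],
    results={"mean_difference": 0.12, "sd": 0.90, "n": 3400},
)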

Similarly, the information recorded as ‘results’ of the individual studies will vary depending on the type of research that has been included and the approach to synthesis that will be used. Studies investigating the impact of educational interventions using statistical meta-analysis as a synthesis technique will require all of the data necessary to calculate effect sizes to be recorded from each study (see the section on synthesis below for further detail on this point). However, even in this type of study there will be multiple data that can be considered to be ‘results’, and so which data needs to be recorded from studies will need to be carefully specified so that recording is consistent across studies.

3.6 Appraising the Quality of Studies

Methods are reinvented every time they are used to accommodate the real world of research practice (Sandelowski et al. 2012). The researcher undertaking a primary research study has attempted to design and execute a study that addresses the research question as rigorously as possible within the parameters of their resources, understanding, and context. Given the complexity of this task, the contested views about research methods and the inconsistency of research terminology, reviewers will need to make their own judgements about the quality of any individual piece of research included in their review. From this perspective, it is evident that using a single criterion, such as ‘published in a peer-reviewed journal’, as a sole indicator of quality is not likely to be an adequate basis for considering the quality and relevance of a study for a particular systematic review.

In the context of systematic reviews this assessment of quality is often referred to as Critical Appraisal (Petticrew and Roberts 2005 ). There is considerable variation in what is done during critical appraisal: which dimensions of study design and methods are considered; the particular issues that are considered under each dimension; the criteria used to make judgements about these issues and the cut off points used for these criteria (Oancea and Furlong 2007 ). There is also variation in whether the quality assessment judgement is used for excluding studies or weighting them in analysis and when in the process judgements are made.

There are broadly three elements that are considered in critical appraisal: the appropriateness of the study design in the context of the review question, the quality of the execution of the study methods and the study’s relevance to the review question (Gough 2007 ). Distinguishing study design from execution recognises that whilst a particular design may be viewed as more appropriate for a study it also needs to be well executed to achieve the rigour or trustworthiness attributed to the design. Study relevance is achieved by the review selection criteria but assessing the degree of relevance recognises that some studies may be less relevant than others due to differences in, for example, the characteristics of the settings or the ways that variables are measured.

The assessment of study quality is a contested and much debated issue in all research fields. Many published scales are available for assessing study quality. Each incorporates criteria relevant to the research design being evaluated. Quality scales for studies investigating the impact of interventions using (quasi) experimental research designs tend to emphasise establishing descriptive causality through minimising the effects of bias (for detailed discussion of issues associated with assessing study quality in this tradition see Waddington et al. 2017). Quality scales for appraising qualitative research tend to focus on the extent to which the study is authentic in reflecting on the meaning of the data (for detailed discussion of the issues associated with assessing study quality in this tradition see Carroll and Booth 2015).

3.7 Synthesis

A synthesis is more than a list of findings from the included studies. It is an attempt to integrate the information from the individual studies to produce a ‘better’ answer to the review question than is provided by the individual studies. Each stage of the review contributes toward the synthesis, and so decisions made in earlier stages of the review shape the possibilities for synthesis. All types of synthesis involve some kind of data transformation that is achieved through common analytic steps: searching for patterns in data; checking the quality of the synthesis; integrating data to answer the review question (Thomas et al. 2012). The techniques used to achieve these vary for different types of synthesis and may appear more or less evident as distinct steps.

Statistical meta-analysis is an aggregative synthesis approach in which the outcome results from individual studies are transformed into a standardised, scale-free, common metric and combined to produce a single pooled weighted estimate of effect size and direction. There are a number of different metrics of effect size, selection of which is principally determined by the structure of outcome data in the primary studies as either continuous or dichotomous. Outcome data with a dichotomous structure can be transformed into Odds Ratios (OR), Absolute Risk Ratios (ARR) or Relative Risk Ratios (RRR) (for detailed discussion of dichotomous outcome effect sizes see Altman 1991). More commonly seen in education research, outcome data with a continuous structure can be translated into Standardised Mean Differences (SMD) (Fitz-Gibbon 1984). At its most straightforward, effect size calculation is simple arithmetic. However, given the variety of analysis methods used and the inconsistency of reporting in primary studies, it is also possible to calculate effect sizes using more complex transformation formulae (for detailed instructions on calculating effect sizes from a wide variety of data presentations see Lipsey and Wilson 2000).
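To make the ‘simple arithmetic’ concrete, the sketch below computes the two families of effect size mentioned above using the standard textbook formulas: a standardised mean difference with a pooled standard deviation for continuous outcomes, and an odds ratio from a two-by-two table for dichotomous outcomes. The numbers are invented and the function names are illustrative rather than drawn from this chapter.

# Sketch: two common effect size calculations (standard formulas, invented data).
import math

def standardised_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d using a pooled standard deviation (continuous outcomes)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def odds_ratio(events_treatment, n_treatment, events_control, n_control):
    """Odds ratio from a 2x2 table (dichotomous outcomes)."""
    a, b = events_treatment, n_treatment - events_treatment
    c, d = events_control, n_control - events_control
    return (a * d) / (b * c)

print(standardised_mean_difference(52.0, 10.0, 60, 48.0, 11.0, 60))  # roughly 0.38
print(odds_ratio(30, 100, 20, 100))                                  # roughly 1.71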

The combination of individual effect sizes uses statistical procedures in which weighting is given to the effect sizes from the individual studies based on different assumptions about the causes of variance, and this requires the use of statistical software. Statistical measures of heterogeneity produced as part of the meta-analysis are used both to explore patterns in the data and to assess the quality of the synthesis (Thomas et al. 2017a).
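As an illustration of what the statistical software does, the sketch below applies fixed-effect inverse-variance pooling and computes Cochran's Q and the I² heterogeneity statistic by hand. The effect sizes and variances are invented, and a real review would typically use a dedicated meta-analysis package (and often a random-effects model) rather than this minimal calculation.

# Sketch: fixed-effect inverse-variance pooling with a simple heterogeneity check.
effects = [0.30, 0.45, 0.10, 0.55]      # invented study effect sizes (e.g. SMDs)
variances = [0.02, 0.05, 0.03, 0.04]    # corresponding sampling variances

weights = [1 / v for v in variances]    # larger, more precise studies get more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q and I-squared describe variation beyond what sampling error would produce.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.2f}, Q = {q:.2f}, I^2 = {i_squared:.0f}%")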

In configurative synthesis the different kinds of text about individual studies and their results are meshed and linked to produce patterns in the data, explore different configurations of the data and to produce new synthetic accounts of the phenomena under investigation. The results from the individual studies are translated into and across each other, searching for areas of commonality and refutation. The specific techniques used are derived from the techniques used in primary research in this tradition. They include reading and re-reading, descriptive and analytical coding, the development of themes, constant comparison, negative case analysis and iteration with theory (Thomas et al. 2017b ).

4 Variation in Review Structures

All research requires time and resources and systematic reviews are no exception. There is always concern to use resources as efficiently as possible. For these reasons there is a continuing interest in how reviews can be carried out more quickly using fewer resources. A key issue is the basis for considering a review to be systematic. Any definitions are clearly open to interpretation. Any review can be argued to be insufficiently rigorous and explicit in method in any part of the review process. To assist reviewers in being rigorous, reporting standards and appraisal tools are being developed to assess what is required in different types of review (Lockwood and Geum Oh 2017 ) but these are also the subject of debate and disagreement.

In addition to the term ‘systematic review’, other terms are used to denote the outputs of systematic review processes. Some use the term ‘scoping review’ for a quick review that does not follow a fully systematic process. This term is also used by others (for example, Arksey and O’Malley 2005) to denote ‘systematic maps’ that describe the nature of a research field rather than synthesise findings. A ‘quick review’ type of scoping review may also be used as preliminary work to inform a fuller systematic review. Another term used is ‘rapid evidence assessment’. This term is usually used when a systematic review needs to be undertaken quickly, and in order to do this the methods of review are employed in a more minimal way than usual, for example by more limited searching. Where such ‘shortcuts’ are taken there may be some loss of rigour, breadth and/or depth (Abrami et al. 2010; Thomas et al. 2013).

Another development has seen the emergence of the concept of ‘living reviews’, which do not have a fixed end point but are updated as new relevant primary studies are produced. Many review teams hope that their review will be updated over time, but what is different about living reviews is that updating is built in from the start as an on-going developmental process. This means that the distribution of review effort is quite different to a standard systematic review, being a continuous lower-level effort spread over a longer time period, rather than the shorter bursts of intensive effort that characterise a review with periodic updates (Elliott et al. 2014).

4.1 Systematic Maps and Syntheses

One potentially useful aspect of reviewing the literature systematically is that it is possible to gain an understanding of the breadth, purpose and extent of research activity about a phenomenon. Reviewers can be more informed about how research on the phenomenon has been constructed and focused. This type of reviewing is known as ‘mapping’ (see for example, Peersman 1996 ; Gough et al. 2003 ). The aspects of the studies that are described in a map will depend on what is of most interest to those undertaking the review. This might include information such as topic focus, conceptual approach, method, aims, authors, location and context. The boundaries and purposes of a map are determined by decisions made regarding the breadth and depth of the review, which are informed by and reflected in the review question and selection criteria.

Maps can also be a useful stage in a systematic review where study findings are synthesised as well. Most synthesis reviews implicitly or explicitly include some sort of map in that they describe the nature of the relevant studies that they have identified. An explicit map is likely to be more detailed and can be used to inform the synthesis stage of a review. It can provide more information on the individual and grouped studies and thus also provide insights to help inform choices about the focus and strategy to be used in a subsequent synthesis.
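One simple way to picture what a map contains is as a cross-tabulation of the coded study characteristics. The sketch below is purely illustrative: the studies, fields and categories are invented, and pandas is used here only as a convenient tabulation tool, not as a prescribed part of the mapping method.

# Sketch: a systematic 'map' as a cross-tabulation of coded study characteristics.
import pandas as pd

studies = pd.DataFrame([
    {"study": "A", "topic": "numeracy", "design": "RCT", "setting": "primary"},
    {"study": "B", "topic": "numeracy", "design": "cohort", "setting": "primary"},
    {"study": "C", "topic": "literacy", "design": "RCT", "setting": "secondary"},
    {"study": "D", "topic": "literacy", "design": "qualitative", "setting": "primary"},
])

# One view of the map: how research activity is distributed by topic and design.
print(pd.crosstab(studies["topic"], studies["design"]))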

4.2 Mixed Methods, Mixed Research Synthesis Reviews

Where studies included in a review consist of more than one type of study design, there may also be different types of data. These different types of studies and data can be analysed together in an integrated design or segregated and analysed separately (Sandelowski et al. 2012 ). In a segregated design, two or more separate sub-reviews are undertaken simultaneously to address different aspects of the same review question and are then compared with one another.

Such ‘mixed methods’ and ‘multiple component’ reviews are usually necessary when there are multiple layers of review question or when one study design alone would be insufficient to answer the question(s) adequately. The reviews are usually required to have both breadth and depth, and in doing so they can investigate a greater extent of the research problem than would be the case in a more focussed single method review. As they are major undertakings, containing what would normally be considered the work of multiple systematic reviews, they are demanding of time and resources and cannot be conducted quickly.

4.3 Reviews of Reviews

Systematic reviews of primary research are secondary levels of research analysis. A review of reviews (sometimes called ‘overviews’ or ‘umbrella’ reviews) is a tertiary level of analysis. It is a systematic map and/or synthesis of previous reviews. The ‘data’ for reviews of reviews are previous reviews rather than primary research studies (see, for example, Newman et al. 2018). Some reviews of reviews use previous reviews to combine both primary research data and synthesis data. It is also possible to have hybrid review models consisting of a review of reviews and then new systematic reviews of primary studies to fill in gaps in coverage where there is not an existing review (Caird et al. 2015). Reviews of reviews can be an efficient method for examining previous research. However, this approach is still comparatively novel and questions remain about the appropriate methodology. For example, care is required when assessing the way in which the source systematic reviews identified and selected data for inclusion and assessed study quality, and when assessing the overlap between the individual reviews (Aromataris et al. 2015).
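One published way of quantifying the overlap mentioned above is the ‘corrected covered area’ (CCA), which measures how often the same primary studies recur across the included reviews. The chapter itself does not prescribe a particular overlap measure, so the sketch below should be read as one possible approach rather than the authors' method; the review inclusion lists are invented.

# Sketch: corrected covered area (CCA) as one measure of overlap between reviews.
def corrected_covered_area(inclusion_lists):
    """inclusion_lists: one list of primary-study identifiers per included review."""
    n = sum(len(ids) for ids in inclusion_lists)               # inclusions counted with repeats
    r = len({sid for ids in inclusion_lists for sid in ids})   # unique primary studies
    c = len(inclusion_lists)                                   # number of reviews
    return (n - r) / (r * c - r)

reviews = [["s1", "s2", "s3"], ["s2", "s3", "s4"], ["s3", "s5"]]
print(f"CCA = {corrected_covered_area(reviews):.2f}")  # higher values indicate more overlap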

5 Other Types of Research Based Review Structures

This chapter so far has presented a process or method that is shared by many different approaches within the family of systematic review approaches, notwithstanding differences in review question and types of study that are included as evidence. This is a helpful heuristic device for designing and reading systematic reviews. However, there are some review approaches that also claim to be research based but that do not claim to be systematic reviews and/or do not conform, in whole or in part, with the description of the process given above.

5.1 Realist Synthesis Reviews

Realist synthesis is a member of the theory-based school of evaluation (Pawson 2002 ). This means that it is underpinned by a ‘generative’ understanding of causation, which holds that, to infer a causal outcome/relationship between an intervention (e.g. a training programme) and an outcome (O) of interest (e.g. unemployment), one needs to understand the underlying mechanisms (M) that connect them and the context (C) in which the relationship occurs (e.g. the characteristics of both the subjects and the programme locality). The interest of this approach (and also of other theory driven reviews) is not simply which interventions work, but which mechanisms work in which context. Rather than identifying replications of the same intervention, the reviews adopt an investigative stance and identify different contexts within which the same underlying mechanism is operating.

Realist synthesis is concerned with hypothesising, testing and refining such context-mechanism-outcome (CMO) configurations. Based on the premise that programmes work in limited circumstances, the discovery of these conditions becomes the main task of realist synthesis. The overall intention is to first create an abstract model (based on the CMO configurations) of how and why programmes work and then to test this empirically against the research evidence. Thus, the unit of analysis in a realist synthesis is the programme mechanism, and this mechanism is the basis of the search. This means that a realist synthesis aims to identify different situations in which the same programme mechanism has been attempted. Integrative Reviewing, which is aligned to the Critical Realist tradition, follows a similar approach and methods (Jones-Devitt et al. 2017 ).

5.2 Critical Interpretive Synthesis (CIS)

Critical Interpretive Synthesis (CIS) (Dixon-Woods et al. 2006 ) takes a position that there is an explicit role for the ‘authorial’ (reviewer’s) voice in the review. The approach is derived from a distinctive tradition within qualitative enquiry and draws on some of the tenets of grounded theory in order to support explicitly the process of theory generation. In practice, this is operationalised in its inductive approach to searching and to developing the review question as part of the review process, its rejection of a ‘staged’ approach to reviewing and embracing the concept of theoretical sampling in order to select studies for inclusion. When assessing the quality of studies CIS prioritises relevance and theoretical contribution over research methods. In particular, a critical approach to reading the literature is fundamental in terms of contextualising findings within an analysis of the research traditions or theoretical assumptions of the studies included.

5.3 Meta-Narrative Reviews

Meta-narrative reviews, like critical interpretative synthesis, place centre-stage the importance of understanding the literature critically and understanding differences between research studies as possibly being due to differences between their underlying research traditions (Greenhalgh et al. 2005). This means that each piece of research is located (and, when appropriate, aggregated) within its own research tradition and the development of knowledge is traced (configured) through time and across paradigms. Rather than the individual study, the ‘unit of analysis’ is the unfolding ‘storyline’ of a research tradition over time (Greenhalgh et al. 2005).

6 Conclusions

This chapter has briefly described the methods, application and different perspectives in the family of systematic review approaches. We have emphasised the many ways in which systematic reviews can vary. This variation links to different research aims and review questions, but also to the different assumptions made by reviewers. These assumptions derive from different understandings of research paradigms and methods and from the personal and political perspectives they bring to their research practice. Although there are a variety of possible types of systematic reviews, a distinction based on the extent to which reviews follow an aggregative or configuring synthesis logic is useful for understanding variations in review approaches and methods. It can help clarify the ways in which reviews vary in the nature of their questions, concepts, procedures, inference and impact. Systematic review approaches continue to evolve alongside critical debate about the merits of various review approaches (systematic or otherwise), so there are many ways in which educational researchers can use and engage with systematic review methods to increase knowledge and understanding in the field of education.

https://ies.ed.gov/ncee/wwc/

https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit

https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2914

Abrami, P. C., Borokhovski, E., Bernard, R. M., Wade, C. A., Tamim, R., Persson, T., Bethel, E. C., Hanz, K. & Surkes, M. A. (2010). Issues in conducting and disseminating brief reviews of evidence. Evidence & Policy, 6(3), 371–389.


Altman, D.G. (1991) Practical statistics for medical research . London: Chapman and Hall.

Arksey, H. & O’Malley, L. (2005). Scoping studies: towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32.


Aromataris, E., Fernandez, R., Godfrey, C., Holly, C., Khalil, H. & Tungpunkom, P. (2015). Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. International Journal of Evidence-Based Healthcare, 13.

Barnett-Page, E. & Thomas, J. (2009). Methods for the synthesis of qualitative research: A critical review, BMC Medical Research Methodology, 9 (59), https://doi.org/10.1186/1471-2288-9-59 .

Brunton, G., Stansfield, C., Caird, J. & Thomas, J. (2017a). Finding relevant studies. In D. Gough, S. Oliver & J. Thomas (Eds.), An introduction to systematic reviews (2nd edition, pp. 93–132). London: Sage.

Brunton, J., Graziosi, S., & Thomas, J. (2017b). Tools and techniques for information management. In D. Gough, S. Oliver & J. Thomas (Eds.), An introduction to systematic reviews (2nd edition, pp. 154–180), London: Sage.

Carroll, C. & Booth, A. (2015). Quality assessment of qualitative evidence for systematic review and synthesis: Is it meaningful, and if so, how should it be performed? Research Synthesis Methods 6 (2), 149–154.

Caird, J., Sutcliffe, K., Kwan, I., Dickson, K. & Thomas, J. (2015). Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evidence & Policy, 11(1), 81–97.

Chow, J. & Eckholm, E. (2018). Do published studies yield larger effect sizes than unpublished studies in education and special education? A meta-review. Educational Psychology Review 30 (3), 727–744.

Dickson, K., Vigurs, C. & Newman, M. (2013). Youth work a systematic map of the literature . Dublin: Dept of Government Affairs.

Dixon-Woods, M., Cavers, D., Agarwal, S., Annandale, E., Arthur, A., Harvey, J., Hsu, R., Katbamna, S., Olsen, R., Smith, L., Riley, R. & Sutton, A. J. (2006). Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Medical Research Methodology, 6, 35. https://doi.org/10.1186/1471-2288-6-35.

Elliott, J. H., Turner, T., Clavisi, O., Thomas, J., Higgins, J. P. T., Mavergames, C. & Gruen, R. L. (2014). Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Medicine, 11 (2): e1001603. http://doi.org/10.1371/journal.pmed.1001603 .

Fitz-Gibbon, C.T. (1984) Meta-analysis: an explication. British Educational Research Journal , 10 (2), 135–144.

Gough, D. (2007). Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Research Papers in Education , 22 (2), 213–228.

Gough, D., Thomas, J. & Oliver, S. (2012). Clarifying differences between review designs and methods. Systematic Reviews, 1( 28).

Gough, D., Kiwan, D., Sutcliffe, K., Simpson, D. & Houghton, N. (2003). A systematic map and synthesis review of the effectiveness of personal development planning for improving student learning. In: Research Evidence in Education Library. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Gough, D., Oliver, S. & Thomas, J. (2017). Introducing systematic reviews. In D. Gough, S. Oliver & J. Thomas (Eds.), An introduction to systematic reviews (2nd edition, pp. 1–18). London: Sage.

Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., Kyriakidou, O. & Peacock, R. (2005). Storylines of research in diffusion of innovation: A meta-narrative approach to systematic review. Social Science & Medicine, 61 (2), 417–430.

Hargreaves, D. (1996). Teaching as a research based profession: possibilities and prospects . Teacher Training Agency Annual Lecture. Retrieved from https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2082 .

Hart, C. (2018). Doing a literature review: releasing the research imagination . London. SAGE.

Jones-Devitt, S. Austen, L. & Parkin H. J. (2017). Integrative reviewing for exploring complex phenomena. Social Research Update . Issue 66.

Kugley, S., Wade, A., Thomas, J., Mahood, Q., Klint Jørgensen, A. M., Hammerstrøm, K., & Sathe, N. (2015). Searching for studies: A guide to information retrieval for Campbell Systematic Reviews . Campbell Method Guides 2016:1 ( http://www.campbellcollaboration.org/images/Campbell_Methods_Guides_Information_Retrieval.pdf ).

Lipsey, M.W., & Wilson, D. B. (2000). Practical meta-analysis. Thousand Oaks: Sage.

Lockwood, C. & Geum Oh, E. (2017). Systematic reviews: guidelines, tools and checklists for authors. Nursing & Health Sciences, 19, 273–277.

Nelson, J. & Campbell, C. (2017). Evidence-informed practice in education: meanings and applications. Educational Research, 59 (2), 127–135.

Newman, M., Garrett, Z., Elbourne, D., Bradley, S., Nodenc, P., Taylor, J. & West, A. (2006). Does secondary school size make a difference? A systematic review. Educational Research Review, 1 (1), 41–60.

Newman, M. (2008). High quality randomized experimental research evidence: Necessary but not sufficient for effective education policy. Psychology of Education Review, 32 (2), 14–16.

Newman, M., Reeves, S. & Fletcher, S. (2018). A critical analysis of evidence about the impacts of faculty development in systematic reviews: a systematic rapid evidence assessment. Journal of Continuing Education in the Health Professions, 38 (2), 137–144.

Noblit, G. W. & Hare, R. D. (1988). Meta-ethnography: Synthesizing qualitative studies. Newbury Park, CA: Sage.


Oancea, A. & Furlong, J. (2007). Expressions of excellence and the assessment of applied and practice‐based research. Research Papers in Education 22 .

O’Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M., & Ananiadou, S. (2015). Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews 4 (1): 5. https://doi.org/10.1186/2046-4053-4-5 .

Pawson, R. (2002). Evidence-based policy: the promise of “Realist Synthesis”. Evaluation, 8 (3), 340–358.

Peersman, G. (1996). A descriptive mapping of health promotion studies in young people, EPPI Research Report. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Petticrew, M. & Roberts, H. (2005). Systematic reviews in the social sciences: a practical guide . London: Wiley.

Sandelowski, M., Voils, C. I., Leeman, J. & Crandell, J. L. (2012). Mapping the mixed methods-mixed research synthesis terrain. Journal of Mixed Methods Research, 6 (4), 317–331.

Smyth, R. (2004). Exploring the usefulness of a conceptual framework as a research tool: A researcher’s reflections. Issues in Educational Research, 14 .

Thomas, J., Harden, A. & Newman, M. (2012). Synthesis: combining results systematically and appropriately. In D. Gough, S. Oliver & J. Thomas (Eds.), An introduction to systematic reviews (pp. 66–82). London: Sage.

Thomas, J., Newman, M. & Oliver, S. (2013). Rapid evidence assessments of research to inform social policy: taking stock and moving forward. Evidence and Policy, 9 (1), 5–27.

Thomas, J., O’Mara-Eves, A., Kneale, D. & Shemilt, I. (2017a). Synthesis methods for combining and configuring quantitative data. In D. Gough, S. Oliver & J. Thomas (Eds.), An Introduction to Systematic Reviews (2nd edition, pp. 211–250). London: Sage.

Thomas, J., O’Mara-Eves, A., Harden, A. & Newman, M. (2017b). Synthesis methods for combining and configuring textual or mixed methods data. In D. Gough, S. Oliver & J. Thomas (Eds.), An introduction to systematic reviews (2nd edition, pp. 181–211), London: Sage.

Tsafnat, G., Dunn, A. G., Glasziou, P. & Coiera, E. (2013). The automation of systematic reviews. British Medical Journal, 345 (7891), doi.org/10.1136/bmj.f139 .

Torgerson, C. (2003). Systematic reviews . London. Continuum.

Waddington, H., Aloe, A. M., Becker, B. J., Djimeu, E. W., Hombrados, J. G., Tugwell, P., Wells, G. & Reeves, B. (2017). Quasi-experimental study designs series—paper 6: risk of bias assessment. Journal of Clinical Epidemiology, 89 , 43–52.


About this chapter

Newman, M., Gough, D. (2020). Systematic Reviews in Educational Research: Methodology, Perspectives and Application. In: Zawacki-Richter, O., Kerres, M., Bedenlier, S., Bond, M., Buntins, K. (eds) Systematic Reviews in Educational Research. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-27602-7_1


Heliyon, 5(5), May 2019

Towards a pedagogical model of teaching with ICTs for mathematics attainment in primary school: A review of studies 2008–2018

This article reviews literature in the field of ICTs in teaching/learning mathematics at an elementary school level. The findings to date in the field of teaching with technology in mathematics classrooms are highly conflicting, with some studies indicating that ICTs impact positively on achievement through altering pedagogy, while other studies indicate that the effect on achievement and pedagogy is in fact negative. The current paper seeks to address these conflicting findings by analysing a variety of meta-analyses and studies in order to answer the following questions: Does pedagogy alter with the use of ICTs in grade 6 mathematics classrooms and, if so, in what ways does it vary? Secondly, does student achievement in mathematics change with the use of ICTs as teaching tools and, if so, in what ways does it do so? Findings from the review indicate that student achievement in mathematics can be positively impacted using technology, depending on the pedagogical practices used by teachers. Technology on its own appears to have no significant impact on students’ attainment. There is a dearth of findings regarding pedagogical variation with ICTs outside of a single meta-analysis that indicates that a ‘constructivist’ approach to teaching/learning with technology is the most effective approach to developing students conceptually. Due to this gap in the literature, the paper outlines a theoretical framework, drawing on Cultural Historical Activity Theory and TPACK, for providing a nuanced study of pedagogical variation with ICTs that can track pedagogical change along various dimensions.

1. Introduction

Almost all schools in the 21 st century make use of Information Communication Technologies (ICTs) as tools for teaching. Even in the developing world context, ICTs feature prominently in schools. The assumption underlying the use of ICTs in schools is that they impact positively on student outcomes. However, the extent to which ICTs can achieve this depends on how a computer is used as a learning/teaching tool: that is, how the computer affects pedagogical practices ( Li and Ma, 2011 ; Hardman, 2015 ). The research regarding the impact of ICTs on altering pedagogy is highly conflicting, with three distinctly different results reported: first, the research indicates that ICTs do not alter pedagogy ( Cassim, 2010 ); second, a body of work suggests that ICTs change pedagogy positively ( Webb and Cox, 2004 ; Bosamia, 2013 ) and finally, contradicting this finding there is research suggesting that ICTs negatively impact on pedagogy (see for example Hardman, 2015 ; Baker, 2019 ). In a bid to gain clarity regarding whether ICTs impact positively on mathematics attainment at primary school and, if so, what pedagogical practices appear most effective to achieve this outcome, the current paper presents a review of ten years of studies conducted in the field. Two research questions are addressed in this paper:

  • 1. Do ICTs impact positively on mathematics outcomes in elementary school and
  • 2. Does pedagogy alter with ICTs and, if so, in what ways does it alter?

2.1. Methodology

While this article is not a systematic review of the literature in the field, I draw on the logic underpinning systematic reviews in order to select the parameters for the review. A systematic analysis enables one to draw conclusions from a wide selection of evidence-based studies (Dewey and Drahota, 2016). All studies within the research parameters are included, across a number of databases, potentially lessening selection bias and setting out the parameters for other researchers to follow. This allows for what Pittway (2008) defines as the seven key features of a systematic literature review: transparency, clarity, integration, focus, equality, accessibility and coverage. These key features of a systematic review inform the current review, which is concerned exclusively with studies from 2008–2018. The need for such a review arises from the fact that of 13 meta-analyses located regarding teaching/learning with ICTs in mathematics classrooms, only four fall within the search period of 2008–2018 (Tamim et al., 2011; Higgins et al., 2012; Li and Ma, 2011; Cheung & Slavin, 2013). I note here that the search terms focus specifically on mathematics; the work of Chauhan (2017) is much broader than mathematics but is referenced in the current review in terms of student attainment. The use of ICTs in schools has grown steadily and there is a need to understand what impact more novel technologies, such as mobile telephony or iPads, are potentially having in mathematics classrooms. In a bid to develop a pedagogical model capable of outlining how best to teach with ICTs, the review also engaged theoretically with the body of knowledge that comes out of Cultural Historical Activity Theory (Wood et al., 1976; Gallimore and Tharp, 1993; Hedegaard, 2002, 2009; Hedegaard and Chaiklin, 2005; Mercer, 2000a, 2000b; Cazden, 1986, 2001; Wells, 1999; Piaget, 1976, 1977; Engeström, 1987, 1999a, 1999b). As noted in the abstract, this review is concerned to answer the following question: Does pedagogy alter with the introduction of ICTs in grade 6 mathematics classrooms? The following questions are investigated in this review:

  • 1. How does pedagogy change in ICT based classrooms?
  • 2. What pedagogical practices are shown to be effective in ICT rich environments?
  • 3. What impact does teaching/learning with ICTs have on student attainment in grade 6 mathematics classrooms?

The question guiding this review served as the first parameter for inclusion/exclusion. Studies not related to mathematics, or related to learning mathematics in secondary school and Higher Education, were excluded. This is because mathematics becomes much more specialised as one leaves primary school, with teachers specialising in that subject. At a primary level, teachers teach all subjects, including mathematics, and may, therefore, not be as specialised in this content area as high school or higher education teachers. The review also limited itself to the decade of 2008–2018, as there is a body of research available about ICT use in mathematics prior to 2008 and much has altered in the technological field since then. The following steps were followed to retrieve data. First, I searched only English databases as I am a first-language English speaker. These included ERIC, EBSCOHOST, HUMANITIES INTL, JSTOR, PsycINFO, Education (A SAGE Full-Text Collection) and COMPUTERS AND APPLIED SCIENCES, as well as Google Scholar. I used Boolean operators, parentheses, and wildcards to develop the following query: [(pedagogy) AND math* AND (ICTs* OR intervention* OR treatment*)]. I selected based on the following criteria:

  • 1. Studies fell between 2008-2018
  • 2. They addressed teaching/learning mathematics with ICTs in elementary schools
  • 3. They were in peer reviewed journal articles or books

I excluded unpublished dissertations or theses as I wanted to present a picture of what is published in the field. The grey literature I sourced was located from Google Scholar with the assistance of a librarian. Only one piece of grey literature met the criteria for inclusion. Altogether, 37 studies were reviewed, nine of which were meta-analyses and one of which was a systematic review of the literature. The findings from the studies are presented below.

2.2. Findings: teaching mathematics at primary school with ICTs

2.2.1. Math attainment with ICTs at a primary level

In what follows, I address the following question outlined in the introduction: What impact does teaching/learning with ICTs have on student attainment in grade 6 mathematics classrooms?

The research regarding the impact of ICTs on mathematical attainment points clearly to the fact that ICTs, at a primary school level, do indeed impact positively on student outcomes (Tamim et al., 2011; Higgins et al., 2012; Li and Ma, 2011; Cheung & Slavin, 2013; Demir and Basol, 2014; Chauhan, 2017; Slavin et al., 2009; Slavin and Lake, 2009; Rakes et al., 2010). Tamim et al. (2011) performed a second-order meta-analysis drawing on 25 meta-analyses over a 40-year period and drew the conclusion that “the average students in classrooms where technology is used will perform 12 percentile points higher” than a student in a more traditional classroom where technology is not used (Tamim et al., 2011: 17). However, one meta-study (Campuzano et al., 2009) indicates that mathematics attainment in primary school is not impacted at all by ICTs. This study is critiqued by Cheung & Slavin (2013) due to what they see as a methodological flaw in the meta-analysis. They go on to illustrate in their own meta-analysis of 74 studies that attainment in mathematics is positively impacted by the use of ICTs. Further findings from the meta-analyses reviewed indicate that students with special needs benefit more from technological input than neurotypical students (Li and Ma, 2011; Higgins et al., 2012) and primary school students benefit more from ICT use than secondary school students. Gender, race and socioeconomic status show no significant effects in the use of ICTs on mathematics attainment. Moreover, in relation to mathematics learning, Li and Ma (2011) show that students tested using non-standardised tests as opposed to traditional, standardised tests to measure attainment have more positive outcomes. Tamim et al. (2011) further indicate that computer technology that supports instruction is more effective than technology that offers direct instruction. This points to the importance of the pedagogical basis of ICT use. The authors are clear to highlight that technology on its own has little benefit to the student; rather, how it is used and how it is designed is important in determining whether or not it will be effective in improving student outcomes. This sentiment is echoed by the work of Higgins et al. (2012), who state that “it is therefore the pedagogy of the application of technology in the classroom which is important: the how rather than the what” (3).
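The percentile-point framing used here can be related to a standardised effect size through the normal distribution: a gain of about 12 percentile points from the 50th percentile corresponds to an effect size of roughly d = 0.3, since the standard normal cumulative distribution at 0.3 is about 0.62. The short calculation below illustrates that translation; the d = 0.3 figure is inferred from the quoted percentile gain rather than reported in the text above.

# Sketch: translating a standardised effect size into percentile points.
from statistics import NormalDist

d = 0.3  # assumed effect size, inferred from the 12-percentile-point figure
percentile_of_average_treated_student = NormalDist().cdf(d) * 100
gain = percentile_of_average_treated_student - 50
print(f"approximately {gain:.0f} percentile points")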

2.2.2. Pedagogical change with ICTs-what works?

While the discussion above indicates that technology impacts on attainment in mathematics, it points to the need to understand pedagogy as the dynamic force behind this change. In what follows, the two remaining questions are addressed: how pedagogy changes in ICT-based classrooms, and which pedagogical practices are shown to be effective in ICT-rich environments.

While there is a significant body of research that speaks to mathematical attainment with ICTs, there is a paucity of published studies that investigate variation in pedagogical practices with ICTs. The most detailed engagement with the question of pedagogy and ICTs is arguably that done by Webb and Cox (2004) , which is extremely dated. This is particularly problematic when one notes that the findings of positive mathematical attainment caution that this is only possible where pedagogical practices integrate and alter in order to meet students' diverse needs. Even more problematic is the finding in some research ( Hardman, 2010 , 2015 ) that the use of ICTs alters pedagogical practices negatively, therefore negatively impacting on students' outcomes. The dearth of research speaking to pedagogical variation with ICTs is problematic, further, in that one of the most significant findings in both historical research and current meta-analyses is that ICT needs to be integrated into pedagogy for there to be any significant gains made with technology ( Higgins et al., 2012 ). Having said this, some studies do point to what effective pedagogy with ICTs should look like, which speak to what must potentially alter in traditional pedagogy for the ICTs to be of benefit. The studies reviewed for this paper variously refer to the most effective pedagogical practices with ICTs as ‘collaborative’, ‘pupil centred’ or ‘constructivist’ ( Li and Ma, 2011 ; Rosen and Salomon, 2007 ). What underlies these terms is the notion that children are active cognising agents ( Piaget, 1976 ) who learn through structured engagement with more competent others ( Vygotsky, 1978 ) to construct novel knowledge. Pointing to a dearth in findings relating to pedagogy with ICTs is the fact that only one meta-analysis ( Rosen and Salomon, 2007 ) was located for this review and this fell outside of the review date periods selected. However, as it is the only meta-analysis comparing traditional and constructivist pedagogy and is referenced by many other studies (see for example Li and Ma, 2011 ), it is included in the current paper.

Rosen and Salomon (2007) carried out a meta-analysis of 32 experimental studies in which they compared the outcomes of students’ mathematical attainment with ICTs in 1) a constructivist pedagogy and 2) a more traditional transmission-based pedagogy. For them, constructivist pedagogy is underpinned by the assumption that “real understanding of mathematics can be achieved when learners socially appropriate and actively construct knowledge” (Rosen and Salomon, 2007: 3), while the learning objective of a more traditional transmission-based pedagogy “is to provide basic math knowledge and skills under conditions of traditional drill and practice learning” (Rosen and Salomon, 2007: 3). This echoes the definition of constructivist vs. traditional pedagogy suggested by Li and Ma (2011), who indicate that traditional pedagogy is teacher-centred while constructivist pedagogy is student-centred. However, in neither of these meta-analyses do the authors outline exactly what constructivist pedagogy with computers could look like. We are left with no sense of what dimensions of pedagogy might differ across the two contexts or, indeed, how mediation occurs meaningfully in the constructivist context. Terms like ‘teacher-centred’ and ‘learner-centred’ become mere rhetorical devices, then, not pointing to actual empirical realities. Taken to its logical conclusion, a radical constructivist view is, in my opinion, deeply problematic for teaching/learning in that it focuses solely on the student’s capacity to construct knowledge, often side-lining the teacher’s crucial role in this process. I have little doubt that students can indeed construct knowledge empirically, on their own, but what type of knowledge they construct when doing so is a matter for consideration. A 6-year-old who sees a dolphin will classify it as a fish because it lives in the sea, recognising that it has fins like a fish and therefore belongs to that class of animal. This, of course, is a misconception; a dolphin has more in common with a cow than a fish. Without being actively taught in a structured manner by a teacher, the child cannot merely arrive at this knowledge on his/her own (Karpov, 2005). While not presenting a detailed definition of what ‘constructivist’ pedagogy looks like, the meta-analyses do point to differing effect sizes between traditional and constructivist pedagogy with ICT, and this is useful in providing a foundation for situating further research.

Rosen and Salomon (2007) found that, when tested against constructivist-appropriate criteria, students in a constructivist-based ICT lesson performed better than those in a traditional ICT-based lesson. This meta-analysis is interesting in that it suggests that a constructivist-based pedagogy with ICTs leads to better attainment than a more traditional pedagogical approach. However, as the study is a meta-analysis and is concerned with experimental comparisons, the paper does not outline exactly what pedagogical practices work best in a constructivist pedagogy or, indeed, how these differ, along which pedagogical dimensions, from a more traditional approach.

These findings are elaborated slightly further in a report by the Education Research Centre (2010), which proposes a view of teaching with technology that takes note of Shulman’s (1987) PCK as informing pedagogical practices, as well as a constructivist view of teaching which they refer to as a didactic view. For them, “The basic principle of the didactic view of learning and teaching was that knowledge is not something given out there, so to speak, but something to be explained …[…]…Knowledge is not a given, the theory says, but built up, and transformed, and – such was the watchword – transposed” (Chevellard, 2007, p. 132, emphasis in original). How exactly one achieves this is slightly vague in their document. While there is little argument in psychological and educational settings about the primacy of teachers in developing students’ knowledge, surprisingly few ICT studies focus exclusively on pedagogy with computers (Webb and Cox, 2004, are a notable exception here, but their work is very early in the 21st century). A concern with improving performance is the focus of much of the discussion, especially in relation to mathematics classrooms. There are, however, some key historical studies in the field that focus particularly on how ICTs influence pedagogy in schools. This research supports the findings by Rosen and Salomon (2007), indicating that the most popular type of software available for use in schools is termed ‘constructivist’ software (Becta, 2000, 2001, 2007) and the most prevalent use of this software plays out in a ‘constructivist’ environment. As we have seen above, what exactly is meant by constructivism can be quite opaque. However, historically, this term draws from Piaget’s (1976) and Vygotsky’s (1978) notions of learning as an active process, requiring participation from the child.

Piaget’s (1976, 1977) theory indicates that children learn actively through a process of assimilation, where they understand novel information in terms of pre-existing cognitive functions, and accommodation, where these existing structures shift because the novel knowledge clashes with what is already known. In a teaching scenario, this requires that children are afforded opportunities to interact with objects in order to develop their knowledge structures. The pedagogical process underlying this is called cognitive conflict and relies on the children being subject to a process of disequilibrium, where previous structures are insufficient to understand novel knowledge and must therefore shift and grow to accommodate this knowledge (Flavell, 1963). While recognising the importance of teaching in cognitive development, for Piaget (1977) teaching is a necessary but not sufficient explanation of cognitive development, and development must necessarily precede learning. Conversely, Vygotsky (1978, 1986) places teaching at the heart of cognitive development, indicating that learning leads to cognitive development if it is properly organised. For Vygotsky, good learning, learning that leads to development, requires mediation, or structured guidance, of scientific/schooled concepts, within a unique social space called the Zone of Proximal Development (ZPD). The primary mediating tool, according to Vygotsky, is semiotic mediation. Much research to date has indicated that Vygotsky’s work is empirically sound in relation to learning/teaching in schools (Daniels, 2001; Hardman, 2005; Wood et al., 1976; Gallimore and Tharp, 1993; Hedegaard, 2002, 2009; Hedegaard and Chaiklin, 2005; Mercer, 2000a, 2000b; Cazden, 1986, 2001; Wells, 1999). These, then, are the theoretical foundations underpinning constructivist ICT programmes. For our purposes, this type of software can be understood as requiring some level of active construction on the student’s behalf. The understanding of pedagogy as involving active, cognising agents engaged in problem solving in a mediated context leads to the following description of pedagogy mobilised in this review: a structured process whereby a culturally more experienced peer or teacher uses cultural tools to mediate or guide a novice into established, relatively stable ways of knowing and being within a particular, institutional context in such a way that the knowledge and skills the novice acquires lead to relatively lasting changes in the novice’s behaviour, that is, learning (Hardman, 2008: 69). A description of what constitutes pedagogy, while theoretically based, needs to be operationalised if one is to study pedagogical change with ICTs in an actual classroom. Cultural Historical Activity Theory (CHAT) provides a necessary situating of human actions within the context in which they unfold, and in the rest of the paper this is explored as a framework for elaborating pedagogical change in ICT-based lessons.

2.2.3. Towards a pedagogical framework for studying ICT use: a Cultural Historical Activity Theory framework

Pedagogy is generally defined as the art, science or act of teaching ( Webb and Cox, 2004 ; Watkins and Mortimer, 1999 ). It is, therefore, the practice that one observes in a classroom. However, pedagogy is not limited solely to what one can observe; a teacher has certain ideas, beliefs and content knowledge that informs how s/he selects what is to be taught as well as how to teach it. Shulman (1986) referred to teachers' Pedagogical Content Knowledge (PCK) as that knowledge that informs how they are to teach as well as what they are to teach. This knowledge is largely invisible to the observer. Shulman's work indicated that knowledge of pedagogical context as well as curriculum knowledge was important in determining how a teacher taught. Investigating pedagogy, therefore, requires that one study a teacher's ideas, beliefs and attitudes towards teaching as well as observing his/her practices in a classroom context.

2.2.4. Pedagogical change with ICTs: CHAT and TPACK

The question that now arises is how best to study the complexity of pedagogical change with ICTs. While Vygotsky (1978, 1986) postulated that tools (for him, primarily language, signs and symbols) could develop a child cognitively, 21st-century research is now clear that tools such as language, and potentially ICTs, alter the brain because of neuroplasticity, whereby synaptic connections in the brain are reorganised due to learning or injury (Doidge, 2007; Sasmita et al., 2018). For Vygotsky (1978) the child is developed cognitively by a more competent other who mediates their engagement with a problem-solving task, using language, predominantly, as a cognitive tool. After Vygotsky’s untimely death, his work was further developed by Leontiev (1981) with a specific focus on how practical activity serves a developmental purpose. This work in turn has been added to by Yrjo Engeström (1987) in his Cultural Historical Activity Theory approach to studying expansive, or evolutionary, learning in work settings, where entire activity systems alter with the use of tools, along different sites or nodes in the activity system. Engeström focuses on an activity system as a basic unit of analysis, rather than focusing solely on an individual, as the individual’s actions, thoughts and beliefs are afforded or constrained by the system they act within. An activity system is represented as a triangle for ease of use; what is notable is that there are various nodes in the system and three mediating relationships in the activity.

If we look at Fig. 1 above, we can see a graphic representation of human action in an activity system. The subject is that person (individual or group) that is the focus of the investigation. The rules of the system afford and constrain behaviours, and these can be tacit as well as explicit. For Engeström (1999b) , mediating artefacts are tools and signs employed by the subject to act on an object. Tools alter the external world and signs alter the subject psychologically. This is in keeping with Vygotsky's notion of tools and signs. However, in a later chapter in Perspectives in Activity Theory (1999b) Engeström suggests that making a distinction between tools and signs is not useful, as internal cognitive tools can become externalised in an activity and external tools can similarly, become internalised during an activity. Indeed, something as obviously practical as a hammer can alter the subject cognitively as well as altering the nail which it strikes. Hence, the firm distinction between practical external tools and internal psychological tools, which derives from Vygotsky's work, is not maintained by Engeström. In this article, we can conceive of ICTs as tools that alter the system, depending on how they are used. Mediating artefacts/tools are used by the subject to act on the object of the activity. The object is that problem space that motivates the activity. The outcome is achieved through acting on the object. Division of labour refers to the roles that members of the system enact, and the community refers to all those involved in working on a shared object. If we relate this to a classroom that is using ICTs, we could imagine the following: the subject is the teacher, who uses ICTs (tools) to act on the object (mathematical understanding) of students using ICTs (mediating artefacts) to produce mathematically competent students (outcome). This happens against a background of each participant taking on a specific role (perhaps teacher teaches, student responds) in a community that shares the common object. The activity system is governed by rules to allow and constrain certain actions. It is important to note that an activity system does not exist in isolation as indicated in Fig. 1 above. Human action is too complex for a single activity system to exist at any one time ( Daniels, 2001 ). By means of an exemplar; I teach specific content at a university and am part of an activity system that focuses on this content. However, I am also a parent and inhabit that activity system too and similarly I am a member of the academy, yet another activity system, that affords and constrains what I can do in my lessons. Viewing pedagogical practices as activity systems enables us to analyse them according to the various nodes outlined above.
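One way to operationalise the nodes described above when coding lesson observations is as a simple structured record. The sketch below merely mirrors the nodes named in the paragraph and the classroom example; the field names are not drawn from any published CHAT coding scheme.

# Sketch: the activity-system nodes as a structured record for coding a lesson.
from dataclasses import dataclass

@dataclass
class ActivitySystem:
    subject: str                 # whose actions are in focus
    mediating_artefacts: list    # tools and signs, e.g. ICTs, language
    object: str                  # the problem space that motivates the activity
    outcome: str
    rules: list                  # tacit and explicit constraints
    community: str
    division_of_labour: str

lesson = ActivitySystem(
    subject="grade 6 mathematics teacher",
    mediating_artefacts=["interactive whiteboard", "dynamic geometry software", "talk"],
    object="students' understanding of fractions",
    outcome="mathematically competent students",
    rules=["school timetable", "assessment policy"],
    community="teacher, students, ICT coordinator",
    division_of_labour="teacher poses tasks; students work in pairs at computers",
)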

Figure 1. The activity system.
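To make the activity-system mapping above concrete for readers who work computationally, the following minimal Python sketch encodes the nodes of a single activity system and instantiates them with the classroom example from the text. All names here (ActivitySystem, maths_lesson and the field values) are hypothetical illustrations introduced for this sketch; they are not part of Engeström's framework or of the original article.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class ActivitySystem:
      """One activity system, with the nodes described in the text (after Engeström, 1987)."""
      subject: str                          # the person or group under investigation
      mediating_artefacts: List[str]        # tools/signs the subject uses to act on the object
      object: str                           # the problem space that motivates the activity
      outcome: str                          # what acting on the object is intended to produce
      rules: List[str] = field(default_factory=list)               # tacit and explicit rules
      community: List[str] = field(default_factory=list)           # all who share the object
      division_of_labour: List[str] = field(default_factory=list)  # roles members enact

  # The classroom example from the text, encoded purely for illustration.
  maths_lesson = ActivitySystem(
      subject="teacher",
      mediating_artefacts=["ICTs (e.g. tablets, mathematics software)"],
      object="students' mathematical understanding",
      outcome="mathematically competent students",
      rules=["teacher teaches, students respond", "school timetable and assessment policies"],
      community=["teacher", "students"],
      division_of_labour=["teacher mediates tasks", "students work on tasks with ICTs"],
  )
  print(maths_lesson.subject, "->", maths_lesson.outcome)

The point of the sketch is simply that each node can be recorded explicitly when mapping a lesson, so that different lessons, or the same lesson over time, are compared against the same set of nodes.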

A further strength of CHAT lies in its ability to track change within and between activity systems by focusing on 'contradictions', which can be understood as 'double binds' in which actors within and across systems experience cognitive dissonance, leading to change. These dynamic sites of change are not necessarily positive: the introduction of ICTs into a school, for example, as a novel tool, can disrupt relations within and between activity systems, leading to contradictions that impact negatively on pedagogy (Hardman, 2008, 2015).

While CHAT's description of human action as encompassing activity systems provides a strong basis for studying pedagogical change, one node in the system that could be developed further is the subject node. Although Engeström (1999a, 1999b) clearly appreciates the importance of the subject's beliefs and ideas in relation to their actions within the activity, he does not elaborate on this in relation to ICT use (although see Lim & Chai, 2004, who go some way towards developing this using Engeström's work). Here I introduce the notion of TPACK as a model for developing the subject position more fully, in order to provide a deeper framework for studying pedagogical change with computers.

2.2.5. TPACK and CHAT: fleshing out the subject

Built on Shulman's (1986) model of pedagogical content knowledge (PCK), Technological Pedagogical Content Knowledge (TPACK) adds the dimension of technology to Shulman's initial model (Mishra & Koehler, 2006). As noted above, pedagogy is extremely complex, and studying it requires more than merely observing a teacher teach: one also needs to know how that teacher thinks and what beliefs and ideas shape their teaching. Mishra and Koehler (2006) add the dimension of technological knowledge to Shulman's original work, highlighting that effective pedagogy with ICTs requires a teacher to integrate technology into a lesson and, therefore, to understand why they select certain technologies to teach certain topics rather than others. The TPACK model describes an integrated connection between content knowledge, pedagogical knowledge and technological knowledge (Srisawasdi, 2014; Voogt et al., 2013), illustrating the complexity of teachers' thinking about the use of technology in the 21st century. This complexity arises in large part because ICTs do not have a single, specific use. A pen, for example, has a specific, transparent use; over time, the fact that a pen is a technology at all is largely forgotten and its use becomes habitual. ICTs, however, are protean (they can be used in many different ways), opaque (how they work is not immediately observable to the user) and inherently unstable, as they are subject to continuous change (Mishra & Koehler, 2006). The model recognises the interaction of three technology-related forms of knowledge in pedagogical decision making (Koehler, 2014):

  • Technological knowledge (TK) includes not only knowing about different kinds of hardware and software, but also knowing how to use them (Angeli & Valanides, 2009).
  • Technological content knowledge (TCK) requires that the teacher knows the ways in which technology and content are linked and how ICTs can be used to alter subject matter.
  • Technological pedagogical knowledge (TPK) entails knowing how the use of ICTs affords and/or constrains certain pedagogical practices, as well as knowing how one can change one's approach to teaching using technology (Ward & Benson, 2010).

Voogt et al. (2013) argue that the TPACK model needs to be seen not as an individual characteristic of a single teacher, but rather, as part of a wider system. For this they draw on Engeström's CHAT (1987) to elaborate a notion of pedagogy as socially situated. While their argument provides an interesting account of TPACK and especially of collaborative learning as situated, it does not go into depth about how pedagogy can be studied by linking TPACK and CHAT. For this article, I argue that TPACK can be seen as a part of the teacher's subject position (in an activity system) and should be investigated when mapping out any pedagogical activity with ICTs.
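Following this argument that TPACK fleshes out the subject node, a similarly hedged Python sketch (again using hypothetical names and example values, not anything prescribed by Mishra and Koehler or by this article) shows how the three technology-related knowledge components listed above could be recorded alongside the subject of the activity system sketched earlier.

  from dataclasses import dataclass

  @dataclass
  class TPACKProfile:
      """Hypothetical record of a teacher's technology-related knowledge (after Mishra & Koehler, 2006)."""
      technological_knowledge: str              # TK: knowing hardware/software and how to use them
      technological_content_knowledge: str      # TCK: how ICTs can be used to alter the subject matter
      technological_pedagogical_knowledge: str  # TPK: how ICTs afford/constrain pedagogical practice

  # Example values invented for illustration; in an analysis these would be attached
  # to the subject node of the activity system, so that pedagogy with ICTs is examined
  # both at the system level and at the level of the teacher's knowledge.
  teacher_profile = TPACKProfile(
      technological_knowledge="can operate the class set of tablets and the graphing app",
      technological_content_knowledge="uses dynamic geometry software to represent transformations",
      technological_pedagogical_knowledge="plans group tasks around shared on-screen models",
  )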

2.2.6. What is acquired through pedagogy? Scientific and everyday concepts

From a Vygotskian perspective (Vygotsky, 1978, 1986; Hedegaard & Chaiklin, 2005), pedagogical practices, properly structured in a mediated manner, lead to the acquisition of what Vygotsky terms 'scientific' concepts. These should not be confused with concepts relating solely to the field of science; rather, they are academic concepts that are abstract and necessarily need to be taught. Vygotsky (1986) distinguishes between everyday concepts, which a child can learn spontaneously, and scientific concepts, which a child learns through guided instruction. It is important to note, however, that these two types of concepts are dialectically entailed: the child understands the scientific in terms of their everyday concepts, and the everyday develops into abstraction through linking with the scientific (Chaiklin & Hedegaard, 2013). The task of an effective teacher, then, is to link scientific and everyday concepts in a manner that makes them meaningful to the child. For Hedegaard, scientific concepts embody theoretical knowledge, and 'Theoretical knowledge can be conceptualised as "symbolic tools" in the form of theories or models of subject-matter areas that can be used to understand and explain events and situations in (concrete life activities) and to organise action' (Hedegaard, 2002: 30). This theoretical knowledge needs to be linked to the child's everyday lived experience, enabling them to analyse their context (Chaiklin & Hedegaard, 2013). How theoretical knowledge is achieved is clarified by Hedegaard in her 'double move', which requires that '…the teacher guides the learning activity both from the perspective of general concepts and from the perspective of engaging students in "situated" problems that are meaningful in relation to their developmental stage and life situations' (Hedegaard, 1998: 120). ICTs, I would argue, are well placed to serve as tools for this, as they provide access to the child's lived experience in a way that a static textbook, for example, does not.

3. Conclusion

So where are we going on the path of teaching mathematics with technology? Findings from this review indicate that ICTs can impact positively on primary school mathematics performance, provided that a constructivist pedagogy is used rather than a traditional transmission-based pedagogy. However, what exactly a constructivist pedagogy looks like in a classroom is not well operationalised in the literature reviewed. While the evidence suggests that pedagogy does indeed change with ICTs, the exact nature of this change remains opaque. To address this gap in the literature, this paper has set out a theoretical framework for studying pedagogy with ICTs, drawing on CHAT as a framework for situating human action within an activity system and allowing one to investigate pedagogy along the various dimensions outlined by CHAT, viz. mediating artefacts, subject, rules, division of labour, object and outcome.

Declarations

Author contribution statement

All authors listed have significantly contributed to the development and the writing of this article.

Funding statement

This work was supported by the National Research Foundation under Grant Number CPRR150702122711. Any opinions, findings, conclusions and recommendations expressed here are those of the author, and are not attributable to these organisations.

Competing interest statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

Acknowledgements

This research was supported by the National Research Foundation under Grant Number CPRR150702122711. Any opinions, findings, conclusions and recommendations expressed here are those of the author, and are not attributable to these organisations.

  • Angeli C., Valanides N. Epistemological and methodological issues for the conceptualization, development, and assessment of ICT-TPCK: advances in technological pedagogical content knowledge (TPCK). Comput. Educ. 2009;52(1):154–168.
  • Baker J. 'Major distraction': Australian primary school dumps iPads, returns to paper textbooks. April 2019. https://www.stuff.co.nz/technology/digital-living/111691580/major-distraction-australian-primary-school-dumps-ipads-returns-to-paper-textbooks
  • BECTA. ImpaCT2: emerging findings from the evaluation of the impact of information and communications technologies on pupil attainment. 2000. Retrieved February 28, 2019, from http://www.becta.org.uk/research/reports/impact2/index.html
  • BECTA. Primary schools of the future: achieving today. A report to the DfEE. 2001. Retrieved January 24, 2019, from http://www.becta.org.uk
  • BECTA. Annual review. 2007. Retrieved February 3, 2019, from http://www.becta.org.uk
  • Bosamia M. Paper presented at the International Conference on Disciplinary and Interdisciplinary Approaches to Knowledge Creation in Higher Education: Canada & India (GENESIS 2013). Swami Sahajanand Group of Colleges; Bhavnagar: December 2013.
  • Campuzano L., Dynarski M., Agodini R., Rall K. Institute of Education Sciences; Washington, DC: 2009. Effectiveness of Reading and Mathematics Software Products: Findings from Two Student Cohorts.
  • Cassim V. University of the Northwest; Potchefstroom: 2010. The pedagogical use of ICTs for teaching and learning within grade 8 classrooms in South Africa. Unpublished Master's thesis.
  • Cazden C.B. Classroom discourse. In: Wittrock M.C., editor. Handbook of Research on Teaching: A Project of the American Educational Research Association. Macmillan; New York: 1986. pp. 432–463.
  • Cazden C.B. Heinemann; Portsmouth, NH: 2001. Classroom Discourse: The Language of Teaching and Learning.
  • Chaiklin S., Hedegaard M. Cultural-historical theory and education practice: some radical-local considerations. Nuances: Estudos sobre Educação. 2013;24(1):30–44.
  • Chauhan S. A meta-analysis of the impact of technology on learning effectiveness in elementary schools. Comput. Educ. 2017;105:14–30.
  • Chevallard Y. Readjusting didactics to a changing epistemology. Eur. Educ. Res. J. 2007;6:131–134.
  • Cheung A.C.K., Slavin R.E. The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: a meta-analysis. Educ. Res. Rev. 2013;9:88–113.
  • Daniels H. Routledge; New York: 2001. Vygotsky and Pedagogy.
  • Demir S., Basol G. Effectiveness of computer-assisted mathematics education (CAME) over academic achievement: a meta-analysis study. Educ. Sci. Theor. Pract. 2014;14(5):2026–2035.
  • Dewey A., Drahota A. Introduction to systematic reviews: online learning module. Cochrane Training. 2016. https://training.cochrane.org/interactivelearning/module-1-introduction-conducting-systematic-reviews
  • Doidge N. Viking; New York: 2007. The Brain that Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science.
  • Engeström Y. Orienta-Konsultit Oy; Helsinki: 1987. Learning by Expanding: An Activity-Theoretical Approach to Developmental Research.
  • Engeström Y. Activity theory and individual and social transformation. In: Engeström Y., Miettinen R., Punamäki R.-L., editors. Perspectives on Activity Theory. Cambridge University Press; Cambridge: 1999. pp. 19–38.
  • Engeström Y. Innovative learning in work teams: analyzing cycles of knowledge creation in practice. In: Engeström Y., Miettinen R., Punamäki R.-L., editors. Perspectives on Activity Theory. Cambridge University Press; Cambridge: 1999. pp. 377–406.
  • Flavell J.H. Van Nostrand Company; New York: 1963. The Developmental Psychology of Jean Piaget.
  • Gallimore R., Tharp R. Teaching mind in society: teaching, schooling and literate discourse. In: Moll L., editor. Vygotsky and Education: Instructional Implications and Applications of Sociohistorical Psychology. Cambridge University Press; Cambridge: 1993. pp. 175–205.
  • Hardman J. An exploratory case study of computer use in a primary school mathematics classroom: new technology, new pedagogy? Perspect. Educ. 2005;23(4):1–13.
  • Hardman J. Researching pedagogy: an activity theory approach. J. Educ. 2008;45:63–93.
  • Hardman J. Variation in semiotic mediation across different pedagogical contexts. Educ. Change. 2010;14(1):91–106.
  • Hardman J. Pedagogical variation with computers in mathematics classrooms: a Cultural Historical Activity Theory analysis. PINS. 2015;48:47–76.
  • Hedegaard M. Situated learning and cognition: theoretical learning and cognition. Mind Cult. Act. 1998;5(2):114–126.
  • Hedegaard M. Aarhus University Press; Aarhus: 2002. Learning and Child Development: A Cultural-Historical Study.
  • Hedegaard M. Children's development from a cultural-historical approach: children's activity in everyday local settings as foundation for their development. Mind Cult. Act. 2009;16:64–81.
  • Hedegaard M., Chaiklin S. Aarhus University Press; Aarhus: 2005. Radical-Local Teaching and Learning.
  • Higgins S.E., Xiao Z., Katsipataki M. The impact of digital technology on learning: a summary for the Education Endowment Foundation. 2012.
  • Karpov Y.V. Cambridge University Press; Cambridge: 2005. The Neo-Vygotskian Approach to Child Development.
  • Koehler M.J. TPACK explained. 2014. Retrieved from http://matt-koehler.com/tpack2/
  • Leontiev A.N. The problem of activity in psychology. In: Wertsch J.V., editor. The Concept of Activity in Soviet Psychology. M.E. Sharpe; Armonk, NY: 1981.
  • Li Q., Ma X. A meta-analysis of the effects of computer technology on school students' mathematics learning. Educ. Psychol. Rev. 2011;22:215–243.
  • Lim C.P., Chai C.S. An activity theoretical approach to research of ICT integration in Singapore schools: orienting activities and learner autonomy. Comput. Educ. 2004;43(1):215–236.
  • Mercer N. How is language used as a medium for classroom education? In: Moon B., Brown S., Ben-Peretz M., editors. The Routledge International Companion to Education. Routledge; London: 2000. pp. 69–82.
  • Mercer N. Routledge; London: 2000. Words and Minds: How We Use Language to Think Together.
  • Mishra P., Koehler M.J. Technological pedagogical content knowledge: a framework for teacher knowledge. Teach. Coll. Rec. 2006;108(6):1017–1054.
  • Piaget J. Penguin; Harmondsworth: 1976. To Understand Is to Invent.
  • Piaget J. Blackwell; Oxford: 1977. The Development of Thought.
  • Pittway L. Systematic literature reviews. In: Thorpe R., Holt R., editors. The SAGE Dictionary of Qualitative Management Research. SAGE; London: 2008.
  • Rakes C.R., Valentine J.C., McGatha M.B., Ronau R.N. Methods of instructional improvement in algebra: a systematic review and meta-analysis. Rev. Educ. Res. 2010;80(3):372–400.
  • Rosen Y., Salomon G. The differential learning achievements of constructivist technology-intensive learning environments as compared with traditional ones: a meta-analysis. J. Educ. Comput. Res. 2007;36(1):1–14.
  • Sasmita A.O., Kuruvilla J., Ling A.P.K. Harnessing neuroplasticity: modern approaches and clinical future. Int. J. Neurosci. 2018:1–17.
  • Shulman L.S. Those who understand: knowledge growth in teaching. Educ. Res. 1986;15(2):4–31.
  • Shulman L. Knowledge and teaching: foundations of the new reform. Harvard Educ. Rev. 1987;57(1):1–22.
  • Slavin R.E., Lake C. Effective programs in elementary mathematics: a best evidence synthesis. Rev. Educ. Res. 2009;78(3):427–455.
  • Slavin R.E., Lake C., Groff C. Effective programs in middle and high school mathematics: a best evidence synthesis. Rev. Educ. Res. 2009;79(2):839–911.
  • Srisawasdi N. Developing technological pedagogical content knowledge in using computerised science laboratory environment: an arrangement for science teacher education program. Res. Pract. Technol. Enhanc. Learn. 2014;9(1):123–144.
  • Tamim R.M., Bernard R.M., Borokhovski E., Abrami P.C., Schmid R.F. What forty years of research says about the impact of technology on learning: a second-order meta-analysis and validation study. Rev. Educ. Res. 2011;81(1):4–28.
  • Voogt J., Fisser P., Pareja Roblin N., Tondeur J., van Braak J. Technological pedagogical content knowledge: a review of the literature. J. Comput. Assist. Learn. 2013;29(2):109–121.
  • Vygotsky L.S. Mind in Society: The Development of Higher Psychological Processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Trans.). Harvard University Press; Cambridge, MA: 1978.
  • Vygotsky L.S. Thought and Language (E. Hanfmann & G. Vakar, Trans.). MIT Press; Cambridge, MA: 1986.
  • Ward C.L., Benson S.N. Developing new schemas for online teaching and learning: TPACK. MERLOT J. Online Learn. Teach. 2010;6:482–490.
  • Watkins C., Mortimore P. Pedagogy: what do we know? In: Mortimore P., editor. Understanding Pedagogy and its Impact on Learning. Paul Chapman Publishing; London: 1999.
  • Webb M., Cox M. A review of pedagogy related to information and communications technology. Technol. Pedagog. Educ. 2004;13(3):235–286.
  • Wells G. Cambridge University Press; Cambridge: 1999. Dialogic Inquiry: Towards a Socio-Cultural Practice and Theory of Education.
  • Wood D., Bruner J.S., Ross G. The role of tutoring in problem solving. J. Child Psychol. Psychiatry. 1976;17:89–100.