Literature reviews are an integral part of the process and communication of scientific research. While systematic reviews have come to be regarded as the gold standard of evidence synthesis, many literature reviews fall short of these standards and may end up presenting biased or incorrect conclusions. In this post, Neal Haddaway highlights 8 common problems with literature review methods, provides examples for each and provides practical solutions for ways to mitigate them.


Researchers regularly review the literature – it's an integral part of day-to-day research: finding relevant research, reading and digesting the main findings, summarising across papers, and drawing conclusions about the evidence base as a whole. However, there is a fundamental difference between brief, narrative approaches to summarising a selection of studies and attempting to reliably and comprehensively summarise an evidence base to support decision-making in policy and practice.

So-called 'evidence-informed decision-making' (EIDM) relies on rigorous systematic approaches to synthesising the evidence. Systematic review has become the gold standard of evidence synthesis and is well established in the pipeline from research to practice in the field of health. Systematic reviews must include a suite of specifically designed methods for the conduct and reporting of all synthesis activities (planning, searching, screening, appraising, extracting data, qualitative/quantitative/mixed methods synthesis, writing; e.g. see the Cochrane Handbook). The method has been widely adapted into other fields, including environment (the Collaboration for Environmental Evidence) and social policy (the Campbell Collaboration).

Despite the growing interest in systematic reviews, traditional approaches to reviewing the literature continue to persist in contemporary publications across disciplines. These reviews, some of which are incorrectly referred to as 'systematic' reviews, may be susceptible to bias and, as a result, may end up providing incorrect conclusions. This is of particular concern when reviews address key policy- and practice-relevant questions, such as the ongoing COVID-19 pandemic or climate change.

These limitations of traditional literature review approaches could be remedied relatively easily with a few key procedures, some of them not prohibitively costly in terms of skill, time or resources.

In our recent paper in Nature Ecology and Evolution, we highlight 8 common problems with traditional literature review methods, provide examples for each from the field of environmental management and ecology, and provide practical solutions for ways to mitigate them.

Problem: Lack of relevance – limited stakeholder engagement can produce a review that is of limited practical use to decision-makers.
Solution: Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets – check out best-practice guidance.

Problem: Mission creep – reviews that don't publish their methods in an a priori protocol can suffer from shifting goals and inclusion criteria.
Solution: Carefully design and publish an a priori protocol that outlines planned methods for searching, screening, data extraction, critical appraisal and synthesis in detail. Make use of existing organisations to support you (e.g. the Collaboration for Environmental Evidence).

Problem: A lack of transparency/replicability in the review methods may mean that the review cannot be replicated – a central tenet of the scientific method!
Solution: Be explicit, and make use of high-quality guidance and standards for review conduct (e.g. CEE Guidance) and reporting (PRISMA or ROSES).

Problem: Selection bias (where included studies are not representative of the evidence base) and a lack of comprehensiveness (an inappropriate search method) can mean that reviews end up with the wrong evidence for the question at hand.
Solution: Carefully design a search strategy with an information specialist; trial the search strategy (against a benchmark list); use multiple bibliographic databases/languages/sources of grey literature; publish search methods in an a priori protocol for peer review.

Problem: The exclusion of grey literature and failure to test for evidence of publication bias can result in incorrect or misleading conclusions.
Solution: Include attempts to find grey literature, including both 'file-drawer' (unpublished academic) research and organisational reports. Test for possible evidence of publication bias.

Problem: Traditional reviews often lack appropriate critical appraisal of included study validity, treating all evidence as equally valid – we know some research is more valid than others and we need to account for this in the synthesis.
Solution: Carefully plan and trial a critical appraisal tool before starting the process in full, learning from existing robust critical appraisal tools.

Problem: Inappropriate synthesis (e.g. using vote-counting and inappropriate statistics) can negate all of the preceding systematic effort. Vote-counting (tallying studies based on their statistical significance) ignores study validity and the magnitude of effect sizes.
Solution: Select the synthesis method carefully based on the data analysed. Vote-counting should never be used instead of meta-analysis. Formal methods for narrative synthesis should be used to summarise and describe the evidence base.
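The contrast between vote-counting and weighted meta-analysis in the last row can be illustrated with a minimal sketch. The effect sizes and variances below are invented for illustration: five small, underpowered studies, none individually significant, whose pooled inverse-variance estimate nonetheless is. This is a simplified fixed-effect model, not a substitute for a proper meta-analysis package.

```python
import math

# Hypothetical standardized effect sizes and their variances from five
# small studies -- illustrative numbers only, not real data.
effects = [0.30, 0.25, 0.35, 0.28, 0.32]
variances = [0.04, 0.04, 0.04, 0.04, 0.04]

def significant(effect, variance):
    """Vote-counting criterion: 95% CI excludes zero."""
    return abs(effect) > 1.96 * math.sqrt(variance)

# Vote-counting: tally 'significant' studies. Here the tally is zero,
# suggesting (wrongly) that there is no effect.
votes = sum(significant(e, v) for e, v in zip(effects, variances))

# Fixed-effect (inverse-variance) meta-analysis: weight each study by
# the precision of its estimate, then pool.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"'significant' votes: {votes} of {len(effects)}")
print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
# The pooled estimate is clearly non-zero even though no single study
# reached significance -- exactly the information vote-counting discards.
```

Because vote-counting throws away both effect magnitude and study precision, a consistent body of underpowered evidence can look like 'no effect' under a tally while a pooled analysis detects it.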

There is a lack of awareness and appreciation of the methods needed to ensure systematic reviews are as free from bias and as reliable as possible, as demonstrated by recent, flawed, high-profile reviews. We call on review authors to conduct more rigorous reviews, on editors and peer reviewers to gate-keep more strictly, and on the community of methodologists to better support the broader research community. Only by working together can we build and maintain a strong system of rigorous, evidence-informed decision-making in conservation and environmental management.


Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Image credit: Jaeyoung Geoffrey Kang via Unsplash
