11: What can go wrong?

Photo by Google DeepMind from Pexels used under Pexels Licence.

The process of developing a review is complex. This section looks at some common issues that we have seen in reviews conducted by less experienced reviewers.

Problems with the research question or objectives

  • Question cannot be answered because it is too broad or too vague.
  • Review deviates from initial objectives (lack of protocol) without justification or explanation.

Unclear method

  • Method not reported in sufficient detail to allow the review to be replicated; lack of transparency.
  • Studies selected for inclusion not representative of evidence base (selection bias), indicating inappropriate search strategy.

Issues with data extraction

  • Extracts from a study (particularly the abstract) simply copied, without recognising that this is a form of plagiarism.
  • Study design not recorded, or difficulty recognising the design used (study designs are often poorly described).
  • Too little detail extracted from studies so synthesis becomes problematic (e.g. the review only reports number of participants).
  • This was an issue with the art therapy example, where the review authors extracted too little detail, limiting interpretation of the findings; in any case, there was insufficient evidence to determine conclusively whether art therapy was effective.
  • Study outcomes and study findings confused (i.e. mixing up what the researchers measured with the conclusions they drew).
  • Relevant, available numerical information not included.
  • Lack of consistency in data extracted from different studies.
  • Not recording (i.e. leaving blank) when relevant data are not reported in an included study.
  • Studies given a name or identifier without a full reference (e.g. just Jones 2012, Article 1, Study 2).
  • Unable to fit information into table/grid appropriately or effectively.

Problems with analysis or reporting

  • No synthesis across studies; instead, a summary of each included study is reported separately.
  • Analysis inappropriate (e.g. uses vote counting rather than narrative synthesis, ignores study design).
  • Assuming that because a study is published it must be good (i.e. failing to understand the importance of critical appraisal, not considering study quality, or not reporting the quality assessment of studies).

Uttley and colleagues[1] looked at 485 articles highlighting problems in published systematic reviews and found 67 specific issues, which they grouped into four main areas or domains:

  • Comprehensiveness, or the completeness of the searches conducted.
  • Rigour, or the appropriateness of the methods used.
  • Transparency, or how clearly the processes used are described.
  • Objectivity, or the efforts taken to minimise bias.

  1. Uttley, L., Quintana, D. S., Montgomery, P., Carroll, C., Page, M. J., Falzon, L., ... & Moher, D. (2023). The problems with systematic reviews: A living systematic review. Journal of Clinical Epidemiology, 156, 30–41. https://doi.org/10.1016/j.jclinepi.2023.01.011
