A Research Protocol for a Systematic Review of Automatic Literature Screening in Medical Evidence Synthesis


Background: Systematic review is an indispensable tool for optimal evidence collection and evaluation in evidence-based medicine. However, the explosive increase in original literature makes it difficult to accomplish critical appraisal and regular updating. Artificial intelligence (AI) algorithms have been applied to automate the literature screening procedure in medical systematic reviews. These studies used different algorithms and reported results with great variance. It is therefore imperative to systematically review and analyse the automatic methods developed for literature screening and their effectiveness as reported in current studies.

Methods: An electronic search for automatic methods for literature screening in systematic reviews will be conducted in PubMed, Embase and the IEEE Xplore Digital Library, supplemented by records found through Google Scholar. Two reviewers will independently conduct the primary screening of the articles and the data extraction; disagreements will be resolved by discussion with a methodologist. Data will be extracted from eligible studies, including the basic characteristics of each study, information on the training and validation sets, and the function and performance of the AI algorithms, and summarised in a table. The risk of bias and applicability of the eligible studies will be assessed independently by the two reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Quantitative analyses, where appropriate, will also be performed.

Discussion: Automating the systematic review process is of great help in reducing the workload of evidence-based practice. Results from this systematic review will provide an essential summary of the current development of AI algorithms for automatic literature screening in medical evidence synthesis and help to inspire further studies in this field.

Registration: PROSPERO registration number CRD42020170815 (28 April 2020).


Background
Systematic review synthesises the results of multiple original publications to provide clinicians with comprehensive knowledge and the current optimal evidence for answering a given research question. The major steps of a systematic review are: defining a structured review question, developing inclusion criteria, searching databases, screening for relevant studies, collecting data from relevant studies, critically assessing the risk of bias, undertaking meta-analyses where appropriate, and assessing reporting biases. 1-3 A systematic review aims to provide a complete, exhaustive summary of the current literature relevant to a research question through an objective and transparent approach. In light of these characteristics, systematic reviews, in particular those combining high-quality evidence, used to sit at the very top of the medical evidence pyramid 4 and are now regarded as an indispensable lens through which evidence is viewed; 5 they are widely used in the practice of evidence-based medicine.
However, conducting systematic reviews for clinical decision making is time-consuming and labour-intensive: reviewers must perform a thorough search to identify any potentially relevant literature, read the abstracts of all retrieved records, and identify candidates for further full-text screening. 6 For original research, the median time from publication to first inclusion in a systematic review ranges from 2.5 to 6.5 years. 7 It usually takes over a year to publish a systematic review from the time of the literature search. 8 Meanwhile, advances in clinical research are likely to render this evidence out of date within several years. With the explosive increase in original research articles, reviewers find it difficult to identify the most relevant evidence in time, let alone update systematic reviews periodically. 9 Therefore, researchers are exploring automatic methods to improve the efficiency of evidence synthesis while reducing the workload of systematic reviews.
Recent progress in computer science points to a promising future in which more intelligent work can be accomplished with the aid of automatic technologies, such as pattern recognition and machine learning.
Seen as a subset of artificial intelligence (AI), machine learning utilises algorithms to build mathematical models from training data in order to make predictions or decisions without being explicitly programmed. 10 Machine learning has been applied across the medical field, for example in diagnosis, prognosis, genetic analysis, and drug screening, to support clinical decision making. 11-14 As for automatic methods in systematic reviews, models for automatic literature screening have been explored to reduce repetitive work and save time for reviewers. 15,16 To date, limited research has focused on automatic methods for biomedical literature screening in the systematic review process. Automated literature classification systems 17 and hybrid relevance rating models 18 have been tested on specific datasets, yet broader review datasets and further performance improvements are required. To address this gap in knowledge, this article describes the protocol for a systematic review that aims to summarise existing automatic methods for screening relevant biomedical literature in the systematic review process.

Objectives
The primary objective of this review is to assess the diagnostic accuracy of AI algorithms (index test) compared with gold-standard human investigators (reference standard) for screening relevant articles from the records identified by electronic search in a systematic review. The secondary objective is to describe the time and work saved by AI algorithms in literature screening. Additionally, we plan to conduct subgroup analyses to explore potential factors associated with the accuracy of AI algorithms.

Study registration
We prepared this protocol following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P). 19 This systematic review has been registered on PROSPERO (Registration number: CRD42020170815, 28 April 2020).

Review question
Our review question was refined using the PRISMA-DTA framework, as detailed in Table 1. In this systematic review, "literatures" refers to the subjects of the diagnostic test (the "participants" in Table 1), and "studies" refers to the studies included in our review.
*The "participants" in our review are the original publications and records identified in a systematic literature search, rather than human participants or patients as in traditional systematic reviews.

Inclusion and exclusion criteria
We will include studies in medical research that report a structured study question, describe the source of the training or validation sets, develop or employ AI models for automatic literature screening, and use the screening results from human investigators as the reference standard.
We will exclude traditional clinical studies in human participants, editorials, commentaries and other non-original reports. Purely methodological studies of AI algorithms without application to evidence synthesis will also be excluded.

Information source and search strategy
An experienced methodologist will conduct searches in three major public electronic medical and computer science databases, PubMed, Embase and the IEEE Xplore Digital Library, for publications from January 2000 to the present. We set this time range because, to the best of our knowledge, AI algorithms developed before 2000 are unlikely to be applicable to evidence synthesis. 20 In addition to the electronic search, we will identify further relevant studies by checking the reference lists of the studies retrieved. Related abstracts and preprints will be searched in Google Scholar. No language restrictions will be applied. We will use both free-text words and MeSH/EMTREE terms to develop strategies for three major concepts: systematic review, literature screening, and AI. Multiple synonyms for each concept will be incorporated into the search. Details of the search strategies are shown in Table 2.

Concept: Systematic review
Search terms: #1 ("medical evidence" OR PICO OR PECODR OR "intervention arms" OR "experimental methods" OR "study design parameters" OR "Patient oriented Evidence" OR "eligibility criteria" OR "evidence based medicine" OR "clinically important elements" OR "evidence based practice" OR "results from clinical trials" OR "research results" OR "clinical evidence" OR "Meta Analysis" OR "Clinical Research" OR "medical abstracts" OR "clinical trial literature" OR "clinical trial characteristics" OR "clinical trial protocols" OR "clinical practice guidelines" OR "systematic review")

Concept: Literature screening
Search terms: #2 (extract* OR classif* OR identif* OR retriev* OR detect* OR judg* OR determin* OR decid* OR sort* OR infer* OR interpret* OR includ* OR exclud* OR filter OR filtering OR select*)

Concept: Artificial intelligence
Search terms: #3 ("Artificial Intelligence" OR "natural language" OR "language processing" OR "Knowledge Acquisition" OR "Knowledge Representation" OR "Support Vector Machine" OR svm OR Gaussian OR Bayes OR Bayesian OR "Cluster" OR Clustering OR "Hidden Markov" OR "conditional random field" OR "Random Forest" OR (Graphical AND model) OR Regression OR "feature engineering" OR "zero-shot learning" OR "few-shot learning" OR "reinforcement learning" OR "transfer learning" OR (unsupervised OR supervised OR semi-supervised OR distant-supervised OR self-supervised) OR "neural network" OR "neural networks" OR (neural AND algorithm*) OR (neural AND machine) OR (network AND algorithm*) OR (network AND machine) OR (automatic AND network) OR (automatic AND networks) OR (automatic AND algorithm*) OR (automatic AND model) OR (automatic AND models) OR (automatic AND machine) OR (automatic AND learning) OR (automatic AND method) OR (learning AND network) OR (learning AND networks) OR (learning AND algorithm*) OR (learning AND machine) OR (learning AND method) OR (deep AND network) OR (deep AND networks) OR (deep AND algorithm*) OR (deep AND model) OR (deep AND models) OR (deep AND machine) OR (deep AND learning))

Concept: Combined concepts
Search terms: #1 AND #2 AND #3
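In the review itself, the methodologist will run these strategies in each database's native interface. Purely as an illustrative aid, a heavily abridged stand-in for the combined concepts can be sanity-checked against PubMed from R; the rentrez package and the simplified query below are our assumptions, not part of the protocol, and hit counts will drift over time.

library(rentrez)  # R client for the NCBI E-utilities

# Abridged stand-in for the full #1 AND #2 AND #3 strategy in Table 2
query <- paste(
  '("systematic review" OR "meta analysis")',
  '(screen* OR classif* OR select*)',
  '("artificial intelligence" OR "machine learning" OR "neural network")',
  sep = " AND "
)

res <- entrez_search(db = "pubmed", term = query, retmax = 0)
res$count  # number of matching PubMed records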

Study selection
Titles and abstracts of the records retrieved from the online electronic databases will be downloaded and imported into EndNote X9.3.2 software (Thomson Reuters, Toronto, Ontario, Canada) for further processing after duplicates have been removed.
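EndNote performs the duplicate removal in our workflow; the underlying rule is easy to reproduce. A minimal sketch in R, where "records.csv" and its "title" and "doi" columns are hypothetical export names, not part of the protocol:

# Rule-based deduplication on normalised title and DOI
recs <- read.csv("records.csv", stringsAsFactors = FALSE)
norm_title <- gsub("[^a-z0-9]", "", tolower(recs$title))  # strip case/punctuation
dup_title <- duplicated(norm_title)
has_doi <- !is.na(recs$doi) & nzchar(recs$doi)
dup_doi <- has_doi & duplicated(tolower(recs$doi))
deduped <- recs[!(dup_title | dup_doi), ]  # keep first occurrence of each record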
All studies will be screened independently by two authors based on titles and abstracts. Studies that do not meet the inclusion criteria will be excluded, with specific reasons recorded. Disagreements will be resolved by discussion with a methodologist if necessary. After the initial screening, the full texts of the potentially relevant studies will be reviewed independently by the two authors to decide on final inclusion. Conflicts will be resolved in the same way as in the initial screening. Excluded studies will be listed and annotated according to the PRISMA-DTA flowchart.

Data collection
A data collection form will be used for information extraction. Data from the eligible studies will be independently extracted and verified by two investigators. Disagreements will be resolved through discussion and by consulting the original publication. We will also try to contact the authors to collect missing data. If a study did not report detailed accuracy data, or did not provide enough data to calculate them, it will be omitted from the quantitative synthesis.
The following data will be extracted from the original studies: characteristics of the study, information on the training and validation sets, and the function and performance of the AI algorithms. The definitions of the variables in the data extraction are shown in Table 3.

Risk of bias assessment, applicability, and levels of evidence

Two authors will independently assess risk of bias and applicability with a checklist based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. 21 QUADAS-2 contains four risk-of-bias domains, concerning patient selection, the index test, the reference standard, and flow and timing.
The risk of bias is classified as "low", "high", or "unclear". Studies with a high risk of bias will be excluded in the sensitivity analysis.
In this systematic review, the "participants" are articles rather than human subjects, and the index test is the AI model used for automatic literature screening. We will therefore slightly revise QUADAS-2 to fit our research context (Table 4). We removed one signalling question, "Was there an appropriate interval between the index test and reference standard?". In the original QUADAS-2, this question serves to judge the bias caused by a change of disease status between the index test and the reference test. The "disease status" in our research context, namely the final inclusion status of an article, does not change, so this concern does not apply.

Diagnostic accuracy measures
For each study, we will extract the data as a two-by-two contingency table from the published text or appendices, or by contacting the corresponding authors, and collect sensitivity, specificity, precision, negative predictive value (NPV), positive predictive value (PPV), negative likelihood ratio (NLR), positive likelihood ratio (PLR), diagnostic odds ratio (DOR), F-measure, and accuracy with 95% confidence intervals. If the outcomes cannot be formulated as a two-by-two contingency table, we will extract the reported performance data. Where possible, we will also assess the area under the curve (AUC), as the two-by-two contingency table may not be available in some scenarios.
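All of the measures above follow directly from the two-by-two table. A minimal sketch in base R (the counts in the example call are hypothetical, and confidence intervals are omitted for brevity):

# All listed measures from one two-by-two contingency table, where
# tp = relevant records the model includes, fp = irrelevant records it includes,
# fn = relevant records it misses, tn = irrelevant records it excludes.
diagnostic_measures <- function(tp, fp, fn, tn) {
  sens <- tp / (tp + fn)                 # sensitivity (recall)
  spec <- tn / (tn + fp)                 # specificity
  ppv  <- tp / (tp + fp)                 # positive predictive value (precision)
  npv  <- tn / (tn + fn)                 # negative predictive value
  plr  <- sens / (1 - spec)              # positive likelihood ratio
  nlr  <- (1 - sens) / spec              # negative likelihood ratio
  c(sensitivity = sens, specificity = spec, PPV = ppv, NPV = npv,
    PLR = plr, NLR = nlr,
    DOR = plr / nlr,                     # diagnostic odds ratio
    F1  = 2 * ppv * sens / (ppv + sens), # F-measure
    accuracy = (tp + tn) / (tp + fp + fn + tn))
}

diagnostic_measures(tp = 450, fp = 1200, fn = 50, tn = 8300)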

Qualitative and quantitative synthesis of results
We will qualitatively describe the application of AI to literature screening. If there are adequate details and sufficiently homogeneous data for quantitative meta-analysis, we will combine the accuracy of AI algorithms in literature screening using the random-effects Rutter-Gatsonis hierarchical summary receiver operating characteristic (HSROC) model, which is recommended by the Cochrane Collaboration for combining evidence on diagnostic accuracy. 23 The effect of threshold will be incorporated in the model, allowing heterogeneous thresholds across studies. The combined point estimates of accuracy will be derived from the summary receiver operating characteristic (SROC) curve.
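As a hedged sketch of how this synthesis could be carried out in R: the reitsma() function in the mada package fits the bivariate model of Reitsma et al., which is mathematically equivalent to the Rutter-Gatsonis HSROC model when no covariates are included; a fully Bayesian HSROC fit (e.g. via rjags) is an alternative. The per-study counts below are hypothetical placeholders for the data extracted in this review.

library(mada)  # meta-analysis of diagnostic accuracy

# Hypothetical per-study two-by-two counts from the data collection step
dta <- data.frame(
  TP = c( 450,  120,  300,   80,  210,   95),
  FN = c(  50,   15,   40,   20,   30,   10),
  FP = c(1200,  400,  900,  350,  700,  260),
  TN = c(8300, 2000, 5000, 1600, 4100, 1900)
)

fit <- reitsma(dta)     # random-effects bivariate / HSROC model
summary(fit)            # pooled estimates and AUC of the SROC curve
plot(fit, sroclwd = 2)  # summary ROC curve with confidence region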
Subgroup analyses and meta-regression will be used to explore between-study heterogeneity. We will explore the following predefined sources of heterogeneity: (1) AI algorithm type; (2) study area of the validation set (targeting specific diseases, interventions, or a general area); (3) searched electronic databases (PubMed, Embase, or others); and (4) the proportion of eligible to original studies (the number of eligible articles identified in the screening step divided by the number of articles identified during the electronic search). Furthermore, we will analyse possible sources of heterogeneity, from both dataset and methodological perspectives, as covariates in the HSROC model, following the recommendations of the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. 23 We will regard a factor as a source of heterogeneity if the coefficient of its covariate in the HSROC model is statistically significant. We will not evaluate reporting bias (e.g. publication bias), since the assumptions underlying the commonly used methods, such as the funnel plot or Egger's test, may not be satisfied in our research context. Data will be analysed using R software, version 4.0.2 (R Foundation for Statistical Computing, Vienna, Austria), with a two-tailed type I error probability of 0.05 (α = 0.05).
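Continuing the sketch above, mada's formula interface allows study-level covariates to enter the bivariate/HSROC model as meta-regression terms; the algorithm-type factor below is a hypothetical illustration of the first predefined heterogeneity source.

# Hypothetical covariate: AI algorithm type per study
dta$algorithm <- factor(c("SVM", "SVM", "SVM", "neural", "neural", "neural"))

# Bivariate meta-regression; a statistically significant covariate
# coefficient (alpha = 0.05) flags the factor as a source of heterogeneity
fit_cov <- reitsma(dta, formula = cbind(tsens, tfpr) ~ algorithm)
summary(fit_cov)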

Discussion
Systematic review has developed rapidly over recent decades and plays a key role in spreading evidence-based practice. Although it costs less money than primary research, a systematic review is still time-consuming and labour-intensive. Conducting a systematic review begins with an electronic database search for a specific research question; at least two reviewers then read the abstract of every retrieved record to identify candidate articles for full-text screening. On average, only 2.9% of retrieved records are relevant and included in the final synthesis; 24 reviewers typically have to find the proverbial needle in a haystack of irrelevant titles and abstracts. Computational scientists have developed various algorithms for automatic literature screening. An automatic literature screening instrument would save resources and improve the quality of systematic reviews by liberating reviewers from repetitive work. In this systematic review, we aim to describe the development process and the algorithms used in various AI literature screening systems, in order to build a pipeline for updating existing tools and creating new models.
The accuracy of automatic literature screening instruments varies widely across algorithms and review topics. 17 Automatic literature screening systems can reach a sensitivity as high as 95%, albeit at the expense of specificity, since reviewers try to include every publication relevant to the topic of the review. Because the automatic systems may have low specificity, it is also important to evaluate how much reviewing work they can save in the screening step. We will not only assess the diagnostic accuracy of AI screening algorithms compared with human investigators, but also collect information on the work saved by AI algorithms in literature screening. Additionally, we plan to conduct subgroup analyses to identify potential factors associated with the accuracy and efficiency of AI algorithms.
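The protocol does not fix a single metric for saved work; one measure commonly reported in the screening-automation literature is work saved over sampling (WSS), the proportion of records reviewers are spared from reading compared with random screening at the recall actually achieved. A minimal sketch in R, with hypothetical counts:

# WSS = (TN + FN) / N - (1 - recall); higher values mean more work saved
wss <- function(tp, fp, fn, tn) {
  n <- tp + fp + fn + tn
  (tn + fn) / n - (1 - tp / (tp + fn))
}

wss(tp = 450, fp = 1200, fn = 50, tn = 8300)  # ~0.74 at 90% recall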
To the best of our knowledge, this will be the first systematic review to evaluate AI algorithms for automatic literature screening in evidence synthesis. A few systematic reviews have examined the application of AI algorithms in medical practice, but their literature search strategies rarely used specific algorithms as search terms. Most used only general terms such as "artificial intelligence" or "machine learning", which may miss studies that report only one specific algorithm. To include AI-related studies as comprehensively as possible, our search strategy covers the AI algorithms commonly used over the past 50 years, and it was reviewed by an expert in machine learning. The process of literature screening can be assessed within the framework of a diagnostic test. Findings from this proposed systematic review will provide a comprehensive and essential summary of the application of AI algorithms to automatic literature screening in evidence synthesis. It may also help to improve and promote automatic methods in evidence synthesis by identifying potential weaknesses in current AI models and methods.