The design and methods used for this protocol review comply with the Centre for Reviews and Dissemination (CRD) Guidance for Undertaking Reviews in Healthcare (39) and the Meta-analyses of Observational Studies in Epidemiology (MOOSE) guidelines (40), and the review is reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) (1). Eligibility criteria were informed by the SPIDER (41) and MOOSE guidelines.
Eligibility criteria
(S) Sample: Adults of any age and sex with a diagnosis of schizophrenia, schizoaffective disorder, or another psychosis spectrum disorder, confirmed by a physician according to the International Classification of Diseases (ICD) (6, 42, 43) or the Diagnostic and Statistical Manual of Mental Disorders (DSM) (44, 45) criteria, irrespective of the severity and duration of illness. Participants with any other confirmed structural or functional neurological disorder will be excluded.
(PI) Phenomenon of Interest: The functional integrity of the mirror neuron system (MNS).
(D) Design: Observational cohort and cross-sectional studies.
(E) Evaluation: Electroencephalography (EEG), magnetoencephalography (MEG), transcranial magnetic stimulation (TMS), functional magnetic resonance imaging (fMRI), near-infrared spectroscopy (NIRS), eye tracking, and electromyography (EMG).
(R) Research type: Qualitative, quantitative, and mixed-methods studies will be eligible.
Information sources
The search will employ sensitive, topic-based strategies designed for each database, with no date, language, or geographical restrictions. We will perform our search on 10 February 2021.
Databases:
- MEDLINE through PubMed (RRID:SCR_004846)
- Embase (RRID:SCR_001650)
- Science Citation Index – Expanded (Web of Science) (RRID:SCR_017657)
- Conference Proceedings Citation Index – Science (Web of Science) (RRID:SCR_017657)
Search strategy
Our search strategies for all databases included in our study, namely MEDLINE (through PubMed), Embase, Science Citation Index – Expanded (Web of Science), and Conference Proceedings Citation Index – Science (Web of Science), are presented in Appendix A.
Study records
Data management
Records will be managed in EndNote (RRID:SCR_014001) version X9 (46), a dedicated reference-management software.
Selection process
Two reviewers (NH and AH) will independently screen the titles and abstracts of identified studies for inclusion. We will link publications from the same study to avoid including data from the same study more than once. If a study cannot be clearly excluded based on its title and abstract, its full text will be reviewed. A study will be included when both reviewers independently assess its full text as satisfying the inclusion criteria. A third reviewer (AV) will act as arbitrator if disagreement persists after discussion.
Data collection process
Using a standardized form, two reviewers (AR and MM) will extract the data independently. A third reviewer (AV) will independently check the data for consistency and clarity. We will attempt to extract data presented only in graphs and figures whenever possible but will include such data only if two reviewers independently obtain the same result. If studies are multi-center, then where possible we will extract data relevant to each. If necessary, we will attempt to contact study authors through an open-ended request to obtain missing information or for clarification.
Data items
Data extracted will include the following summary data: sample characteristics, sample size, type of modality used for examining the MNS, the task used in the study, funding sources, declarations of interest, results, and a summary of the findings as normal, abnormal, or mixed (indicating that different components of the data suggest different things, or that the reported results are not entirely statistically robust).
Outcomes and prioritization
Studies will be grouped according to the modality used, which may include electroencephalography (EEG), magnetoencephalography (MEG), transcranial magnetic stimulation (TMS), functional magnetic resonance imaging (fMRI), near-infrared spectroscopy (NIRS), eye tracking, and electromyography (EMG). However, studies using general behavioral measures (e.g., imitation tasks) and studies of reaction time during automatic imitation will be excluded. Finally, we will review approaches that do not fit the standard task categories, as well as findings from structural MRI studies.
Risk of bias in individual studies
Two authors (AR and MM) will independently evaluate the included studies for risk of bias. We will discuss any disagreement and document our decisions, and a third author (AV) will act as arbitrator in such a case. Cohen's κ will be used to assess agreement between reviewers. All tools and processes will be piloted before use. We will use the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (47). This tool consists of fourteen questions addressing the following: 1- Research question, 2 and 3- Study population, 4- Study eligibility criteria, 5- Sample size justification, 6- Whether the exposure was assessed prior to outcome measurement, 7- Whether a sufficient timeframe was given to see an effect, 8- Different levels of the exposure of interest, 9- Exposure measures and assessment, 10- Repeated exposure assessment, 11- Outcome measures, 12- Blinding of outcome assessors, 13- Follow-up rate, and 14- Statistical analysis. There are five possible answers to each question: yes, no, cannot determine, not applicable, and not reported. Finally, there are three possible judgments for the quality rating of each study: good quality, fair quality, and poor quality. A high risk of bias translates to a rating of poor quality, while a low risk of bias translates to a rating of good quality. The tool, with the authors’ judgment for a “yes” answer to each question, is presented in Appendix B.
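For illustration, inter-rater agreement on the quality ratings can be computed as Cohen's κ from the two reviewers' judgments. The sketch below (in Python; the ratings shown are hypothetical, and the review's own analyses will be run in R) shows the calculation from observed and chance-expected agreement:

```python
# Minimal sketch of Cohen's kappa for two raters; all ratings are hypothetical.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters who rated the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: proportion of items rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance-expected agreement, from each rater's marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical quality ratings from two reviewers for ten studies.
rater1 = ["good", "fair", "poor", "good", "fair", "good", "poor", "fair", "good", "good"]
rater2 = ["good", "fair", "fair", "good", "fair", "good", "poor", "poor", "good", "good"]
print(round(cohens_kappa(rater1, rater2), 3))
```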
Data synthesis
We will use R version 4 (48) for our data synthesis. A meta-synthesis will be performed using vote counting, and results will be presented as a harvest plot. A summary table of all studies included in the synthesis will also be presented, containing the following:
- Modality of the study (EEG, fMRI, etc.)
- Study ID
- Number of schizophrenia participants
- Mean age of participants (in years)
- Task: the task that was used with the modality.
- Results
- Summary: the results of each study will be summarized in terms of whether the paper provides evidence for an abnormal MNS in schizophrenia, a normal MNS, or mixed evidence. Mixed evidence can mean either that different components of the data suggest different things, or that the reported results are not entirely statistically robust.
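The vote-counting step amounts to tallying the per-study summary verdicts within each modality; these tallies are what a harvest plot displays. A minimal sketch (in Python, with entirely hypothetical records; the protocol's synthesis itself will be done in R):

```python
# Sketch of vote counting by modality; the study records are hypothetical.
from collections import defaultdict

# Hypothetical (study_id, modality, summary) records.
studies = [
    ("S01", "EEG",  "abnormal"),
    ("S02", "EEG",  "normal"),
    ("S03", "fMRI", "abnormal"),
    ("S04", "fMRI", "mixed"),
    ("S05", "TMS",  "abnormal"),
]

def vote_count(records):
    """Count normal/abnormal/mixed verdicts within each modality."""
    tally = defaultdict(lambda: {"normal": 0, "abnormal": 0, "mixed": 0})
    for _, modality, summary in records:
        tally[modality][summary] += 1
    return dict(tally)

for modality, counts in vote_count(studies).items():
    print(modality, counts)
```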
Meta-bias
To evaluate the risk of reporting bias across studies, a test for funnel plot asymmetry will be conducted. This test examines whether the relationship between estimated effect size and study size is greater than would be expected by chance (49). Funnel plots will be generated for visual inspection of potential publication bias. In the presence of publication bias, the plot will remain symmetrical at the top, while data points will be increasingly missing toward the middle and bottom of the plot (50).
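One common regression-based asymmetry test (Egger's test) regresses the standardized effect (effect / SE) on precision (1 / SE); an intercept far from zero suggests small-study asymmetry. A minimal sketch of the regression step, with hypothetical effect sizes (in Python; the protocol's actual analysis will use R, where packages such as metafor implement the full test with significance levels):

```python
# Sketch of the Egger regression intercept; the data below are hypothetical.
import numpy as np

def egger_intercept(effects, std_errors):
    """Intercept of the regression of standardized effect on precision."""
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    y = effects / std_errors   # standardized effects
    x = 1.0 / std_errors       # precision
    slope, intercept = np.polyfit(x, y, 1)
    return float(intercept)

# Hypothetical effect sizes and standard errors from ten studies.
effects = [0.42, 0.35, 0.60, 0.28, 0.51, 0.75, 0.33, 0.48, 0.90, 0.20]
ses     = [0.10, 0.12, 0.20, 0.08, 0.15, 0.30, 0.11, 0.14, 0.35, 0.07]
print(egger_intercept(effects, ses))
```

A full test would also report a confidence interval or p-value for the intercept; this sketch shows only the point estimate.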
Confidence in cumulative evidence
The strength of the overall body of evidence will be assessed using the Confidence in Evidence from Reviews of Qualitative research (CERQual) method (51). This approach uses four components to evaluate confidence in the review findings: the methodological limitations of included studies, the relevance of the included studies to the review questions, the coherence of the review findings, and the adequacy of the data contributing to each review finding. In the first instance, MM will evaluate each finding using the four CERQual components and a four-point scoring system ranging from 'no or very minor concerns' to 'substantial concerns'; AV will then check the evaluation. The review authors will meet to discuss the scores and assign each finding an overall CERQual assessment. Each finding starts with a 'high confidence' rating, which may be downgraded to 'moderate confidence', 'low confidence', or 'very low confidence' if the CERQual process reveals concerns.
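The downgrading step can be sketched as a simple rule: start each finding at 'high confidence' and step it down once per component that raises concern. The sketch below (in Python) is only illustrative; the mapping from concern scores to downgrade steps, and the intermediate labels 'minor' and 'moderate', are assumptions for illustration, not rules stated by CERQual, where the overall judgment remains a reviewer decision:

```python
# Illustrative sketch only: the one-downgrade-per-concerning-component
# rule is an assumption, not a CERQual prescription.
LEVELS = ["high", "moderate", "low", "very low"]

def cerqual_confidence(component_scores):
    """component_scores: four CERQual component ratings, each one of
    'no or very minor', 'minor', 'moderate', or 'substantial' (concerns)."""
    downgrades = sum(1 for s in component_scores if s != "no or very minor")
    return LEVELS[min(downgrades, len(LEVELS) - 1)]

print(cerqual_confidence(["no or very minor"] * 4))   # no concerns raised
print(cerqual_confidence(["minor", "no or very minor",
                          "substantial", "no or very minor"]))
```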