In our study, we presented an example of organizing an online journal club together with preliminary evidence of its effectiveness. Although several journal clubs, including online ones, operate in the Russian Federation, only the Higher School of Oncology has reported its results [9].
A closed format was chosen when the club was organized. We assumed that competitive selection would unite the participants, increase their motivation, and raise the perceived value of a place in the club. We have not found such an organizational model described in the literature.
A distinctive feature of our journal club was its broad inclusion criteria. Most published clubs recruit a single cohort: students, residents, or health practitioners [3, 4, 16–18]. We recruited fourth-year and more senior students, residents, medical teachers, and physicians. Students showed the greatest interest in selection, which is reflected in their predominance in the sample. This heterogeneity of the group has its advantages: students and residents can interact with practicing physicians, ask questions, and learn how the methods described in articles are applied, while physicians, in turn, can practice teaching and mentoring skills and help students at the start of their scientific careers.
Our club has 25 participants. The largest group we found in the literature was in the study by Wenke et al., which included 126 clinicians, but that study combined the experience of 9 journal clubs [4]. Most articles report groups of 20–30 participants, which is comparable to ours [1, 17–19].
We expected that our choice of meeting frequency, date, and time would maximize attendance. In the end, attendance remained at about 50%, which nevertheless met the stated goal of continuing the club's work. We had no control group for comparing attendance, but other studies report higher attendance in well-organized groups [1, 16]; a disadvantage of these publications is their age. As in the study by Linzer (1987), our meetings were fairly regular (up to 2–3 times a month) [1]. We believe this supports better assimilation of knowledge and practice of skills, which is confirmed by the self-reports and voting results of the club's participants. However, other studies report that a fixed meeting date contributes most to the effectiveness of a journal club [4, 10, 11].
To evaluate effectiveness, we used tests. Unfortunately, we had to adapt the questions because, in our opinion (based on the meetings), participants were not yet sufficiently prepared to complete the full versions. At the second sampling we found an improvement in results, which became non-significant after adjustment for potential confounders; at the third sampling there were no statistically significant differences. We offer two explanations: 1) the tests we modified are not valid measures of effectiveness; 2) with each new sampling, participants become more engaged and better acquainted with the basic concepts of evidence-based medicine. Not all articles report an objective assessment of participants' knowledge, and the published testing results remain contradictory [1, 3, 4, 8, 16–19]. Only one meta-analysis pooled objective knowledge assessments, finding no significant difference (SMD 0.15, 95% CI [−0.09; 0.39], p = 0.22) across different tests [5]. In the future, we plan to assess the effectiveness of our journal club with validated instruments, without modifications or corrections.
All publications use self-report questionnaires, whose results show greater enjoyment of participation, more critical analysis of the literature, more careful use of published results, and an increase in the time spent reading and the number of sources read [1, 3, 4, 6, 8, 19]. We observed the same trend in our study. However, these are subjective criteria and should not be the sole basis for evaluating a journal club's effectiveness. Linzer additionally noted the absence of an association between self-reports and objective results [1].
A separate issue is the validity of testing. There is still no accepted standard for assessing knowledge of evidence-based medicine and statistics [2]. Many instruments have been proposed by different authors, but they still need to be validated in various cohorts [13, 15, 20–22].
Free-form comments are very helpful in our work, as they draw attention to the strengths and weaknesses of the organization. We try to respond to most of them and to improve the journal club accordingly.
Our study has several limitations. First, the closed online format may reduce participants' engagement in the club's work and discourage new applicants. Second, the heterogeneous sample, consisting mainly of medical students, does not allow us to fully understand how this training may change their future practice (potential selection bias). Third, the use of various modified tests reduces their validity and precludes firm conclusions about the presence or absence of an effect; because the tests differ in difficulty, accurate conclusions about learning dynamics are also impossible. Fourth, low attendance may bias the objective assessment in either direction through incomplete assimilation of the material (an additional potential confounder). Fifth, self-report questionnaires may overstate learning (sampling, response, non-response, and extreme-responding bias), which may subsequently lead to disappointment with this method. Sixth, there is no full control group, because in the online format we could not follow up the people who were tested but not admitted to the club; some of them may have trained in other journal clubs or studied epidemiology and statistics on their own, which may distort test comparisons (selection and allocation bias). Seventh, the small sample size may substantially reduce the power of the study and lead to false-negative results (type II error). However, balancing the number of participants is difficult: too large a cohort can impair learning, provoke conflicts, preclude an individual approach to each student, and complicate the organization of discussion sessions [23].
Eighth, unrecorded and unobserved covariates may have been present during data collection or statistical analysis and could have contributed to the negative results (potential confounding bias).