This study evaluates the feasibility and effectiveness of this approach, aiming to identify the advantages and disadvantages of online-CP from the medical students’ perspectives.
This study was approved by the Ethics Committee of Chiba University (Approval No. 3425). The study database was anonymized, and the study complied with the requirements of the Japanese Ministry of Health, Labour and Welfare.
Forty-three medical students participated in the online-CP. Until February 28, 2020, groups of 10 or 11 medical students had undergone a four-week training program as members of a medical team of doctors and residents. Between May and July 2020, a four-week online-CP was conducted in the RU and GM departments. Hybrid CCs, combining online-CP with in-person CCs, resumed at the hospital after July 2020. In advance, we conducted several faculty development sessions for the supervising physicians and standardized the course content.
Informed consent for the use of participants’ performance data and questionnaire responses was obtained and documented online.
Online learning procedure
Department’s CCs and online-CPs
An overview of online-CPs and CCs in each department is shown in Figure 1. Over the course of four weeks, online-CPs using sEHR were conducted in the RU, and online-CPs using e-PBL and online-VMI were conducted in the GM.
Three cases each of respiratory medicine (i.e., complications associated with infectious pneumonia and acute exacerbation of COPD, pulmonary thromboembolism, and chemotherapy for lung cancer) and thoracic surgery (i.e., lung cancer, mediastinal tumor, and pneumothorax) were selected, as shown in Figure 2. Information from medical professionals’ electronic health records (EHRs), imaging findings, and examination results was added to Excel worksheets, with personal data excluded. The clinical course of each case ranged from seven to fifty-five days and was divided into five segments per case. Each simulated case record was arranged to fit the course’s learning objectives and the five-day practice schedule.
As shown in Supplementary Figure 1, the participants attended an orientation via an online meeting. In the RU, one case per week was shared by two to three students, and one surgical case per week was assigned to each student. Every morning, an sEHR corresponding to the practice day was uploaded to the LMS; the students reviewed the information and made an assessment based on the results of the various examinations pertaining to the case, also taking into consideration the medical interviews and physical examinations. Online meetings were then held with the group members and the supervising physician. After sharing the results of the tasks assigned the previous day, the meeting proceeded as follows: 1) the student summarized the clinical course of the case; 2) the student was asked to list the questions and examinations required, and the supervising physician answered with the findings for those items; 3) the examinations performed were analyzed; and 4) the student discussed the treatment plan. Finally, students were asked to assign responsibility for tasks related to the case. Each online meeting lasted 30–60 minutes. The students wrote a medical record based on the sEHR and the meeting, and uploaded it to the LMS by evening. The supervising physician provided feedback on the submitted records. The meetings and feedback were supervised by four of the authors (HK, YT, GS, AK).
The e-PBL was based on outpatient cases involving biological, psychological, and social issues and was conducted through the LMS (Figure 2, Supplementary Figure 2). Twenty outpatient cases were selected from actual cases encountered in the GM. One case per week was shared by two or three students. Additional medical history, physical examination findings, and laboratory findings were shared with students daily, and the students then discussed the assigned tasks in small groups. Each group reviewed the case, the key clinical information that formed the basis of the differential diagnosis, and additional clinical information to identify candidate diseases. Weekly assignments were posted on a discussion board on the LMS, where students were asked to formulate clinical questions and post supporting evidence. Participants were encouraged to view other groups’ cases on the LMS and to participate actively in the discussions.
Practicing medical interviews was necessary to ensure that an appropriate history was taken and physical examinations were conducted as needed. In these sessions, a faculty member acted as the patient and a student played the role of the doctor in a medical interview held as an online meeting (Supplementary Figure 3). The supervising physician provided the clinical information to the medical student. For the physical examination, students presented their findings with photographs and videos alongside the oral presentation to make the interview more realistic. Several students from the same group could observe the interview, which promoted peer review. Using the chat function of the video conference system, students could share their clinical reasoning, and breakout rooms enabled small-group discussions of the clinical reasoning process. This activity was conducted three times a week, for an hour per session, using twelve cases involving common diseases encountered in primary care settings.
Quantitative data collection
Before and at the end of the online-CP in each department, participants completed a questionnaire (Table 1). Clinical skills competence comprised nine items, including seven items related to the mini-clinical evaluation exercise (Mini-CEX) and the ability to write medical records and a case summary.
Qualitative data collection
At the end of the online-CP, students took part in semi-structured focus group interviews on the advantages and disadvantages of the program; this qualitative phase was used to help explain the quantitative results. Data saturation was reached with eight groups of medical students (43 participants in total). The selection criteria specified that all medical students be included, as the target population had to be homogeneous to investigate perceptions of the online-CP. Students were recruited for the focus group sessions after the online-CP ended; one student was unable to participate in the online focus group because of network issues.
Focus group interviews (FGIs) were conducted by two physician-researchers (HK and KS) during the intervention or controlled teaching sessions and recorded independently, using an iteratively developed interview guide. Students were asked the following questions: 1) ‘What do you think are the advantages of online-CP, and why do you believe these are advantages?’; 2) ‘What do you think are the disadvantages of online-CP, and why do you believe these are disadvantages?’ (Supplementary Table 1). The interview guide was validated by two researchers (HK and KS) prior to data collection.
Each interview lasted no longer than thirty minutes, in consideration of workload and participant fatigue. With participants’ permission, the interviews were recorded using the Zoom™ video recording system and transcribed verbatim.
Quantitative data are expressed as the mean ± standard deviation (SD) unless otherwise indicated. The Wilcoxon signed-rank test was used to compare parameters before and after the online course. Pearson’s chi-square test was used to compare self-study time during the online-CP, the CC, and the period during which the CC was postponed. A p-value <0.05 was considered statistically significant. All statistical analyses were performed using JMP 13.0 (SAS Institute, Cary, NC, USA).
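As an illustration, the paired pre/post comparison described above can be sketched in Python with SciPy rather than JMP; the scores below are hypothetical Likert-scale ratings invented for the example, not the study's data.

```python
from scipy import stats

# Hypothetical pre/post self-rated competence scores (1-5 Likert scale)
# for ten students -- illustrative only, not the study's actual data.
pre  = [2, 3, 2, 3, 2, 4, 3, 2, 3, 2]
post = [4, 4, 3, 5, 4, 4, 4, 3, 4, 3]

# Paired, non-parametric comparison of scores before vs. after the course.
stat, p = stats.wilcoxon(pre, post)
significant = p < 0.05  # the study's significance threshold
```

Because the data are paired (same student before and after) and ordinal, a signed-rank test is appropriate where a paired t-test's normality assumption may not hold.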
Qualitative content analysis
In line with previous studies, qualitative content analysis was used to analyze FGI transcripts (6). This analysis comprises descriptions of the manifest content and interpretations of the latent content (7). HK and KS independently read and coded all transcripts. Subsequently, they discussed, identified, and agreed on the coding of the descriptors. Interrater reliability was measured using the Kappa coefficient (8) [0.8–1.0 = almost perfect; 0.6–0.8 = substantial; 0.4–0.6 = moderate; 0.2–0.4 = fair].
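The interrater-reliability calculation can be sketched in plain Python. The function names, the two coding vectors in the example, and the band labels are illustrative assumptions (the bands follow the thresholds cited above), not the study's actual codes.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' codes of equal length."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: proportion of items both raters coded identically
    po = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # expected chance agreement from the raters' marginal distributions
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    if pe == 1:  # both raters constant: agreement is trivially perfect
        return 1.0
    return (po - pe) / (1 - pe)

def interpret(kappa):
    """Map kappa to the bands cited in the text."""
    if kappa >= 0.8: return "almost perfect"
    if kappa >= 0.6: return "substantial"
    if kappa >= 0.4: return "moderate"
    if kappa >= 0.2: return "fair"
    return "slight or poor"
```

For example, two coders agreeing on 7 of 8 binary codes with marginals of 5/8 and 4/8 positive yield κ = (0.875 − 0.5)/(1 − 0.5) = 0.75, i.e., "substantial" agreement under the bands above.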