Using the current evaluation criteria, the five judges produced homogeneous evaluation results in both the video review and in-person contests. In both contests, judges tended to give higher or lower evaluation scores depending on the individual evaluation criteria. Some judges tended to give "harsh" evaluation scores, while others tended to give "gentle" scores. However, these differences in scoring tendency did not affect the final relative ranking of the participants. Two judges participated in both the video review and in-person review contests, but neither of these judges was identified as assigning "harsh" or "gentle" scoring patterns. There was also no statistically significant difference between the scoring results of the video review contest and the in-person review contest. For our study, when an expert is in charge of the judging and is provided with the task content and scoring table, the results indicate that similar and objective evaluation scores may be obtained with either video review or in-person review.
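The observation that a judge's overall severity does not alter the final ranking can be illustrated with a small sketch. The scores below are hypothetical, not study data; the point is only that a uniform "severity" offset leaves the relative ordering of participants unchanged.

```python
# Hypothetical illustration: a "harsh" judge who scores consistently lower
# than a "gentle" judge still produces the same relative ranking.

def ranking(scores):
    """Return participant indices ordered from highest to lowest score."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Hypothetical raw scores for six participants (not actual contest data).
gentle_judge = [88, 92, 75, 81, 95, 70]
harsh_judge = [s - 10 for s in gentle_judge]  # uniform severity offset

print(ranking(gentle_judge))  # [4, 1, 0, 3, 2, 5]
print(ranking(harsh_judge))   # identical ranking despite lower raw scores
assert ranking(gentle_judge) == ranking(harsh_judge)
```

In practice, severity differences that are not perfectly uniform would be examined with a rank-correlation measure, but the principle is the same: ranking depends on relative, not absolute, scores.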
The impact of COVID-19 has created a need for social distancing, which has affected the simulation education environment: it has become difficult for many trainees to assemble in person, practice a simulation, and subsequently be evaluated on their performance. Of course, the purpose of simulation training is not competition in surgical procedure contests, but to motivate trainees to improve their skills and to deliver appropriate performance feedback to them. To this end, we conducted this study to confirm that objective and equal evaluation is not affected by differences in judges or by the choice of video versus in-person observation. Results from this study show that it is possible to objectively evaluate microsurgical technique using a web system, because the details of the technique can be projected on a monitor. This is an advantage of microsurgical education, since detailed techniques are easy to record. After both contests, contestants were given the opportunity to discuss the strengths and weaknesses of their techniques with the judges, and they commented to the study organizers that their participation was very meaningful.
Simulation Materials Needed for Microsurgical Technique Evaluation
In this contest, we used artificial blood vessels, selected based on the following factors: availability of the same simulation material for all participants, cost, safety, animal welfare, ease of access, and similarity to actual surgery. There are many reports on the practice of vascular anastomosis using arteries from chicken wings[1, 6], and their similarity to actual human blood vessels has been discussed. For this study, however, artificial blood vessels were selected because they provide a better guarantee of uniformity across participants than chicken wing arteries. If artificial blood vessels are not available, chicken wing arteries are a feasible alternative. A 5-minute time limit for each participant was set to allow time for scoring after observation of the actual procedure, as well as sufficient time for participant changeover and preparation. The in-person contest with six participants took more than three hours from start to finish, including two hours of participant discussion with the judges. Obviously, more participants will require more time. If chicken wing arteries are used, drying of the wing blood vessels is required during preparation, which will add time to the contest. In addition to artificial blood vessels and chicken wing arteries, other simulation training methods have been reported[7, 11, 15, 22] that use placentas and vascular models created with 3D printers. These models have higher similarity to the actual surgical field than artificial blood vessels[10, 12, 22]. It has been suggested that these alternatives may contribute to the improvement of surgical techniques, but identical items are difficult to prepare, limiting their use in objective evaluation programs. Furthermore, they are not readily available.
In order to investigate and compare the educational effects of simulation-based training models, McGaghie, et al.[8] proposed a translational outcome classification for simulation-based training models. This is a five-level classification of effectiveness, ranging from trainee satisfaction (effectiveness level 1) through patient outcomes (effectiveness level 4) to cost reduction and skill improvement (effectiveness level 5). Skill assessment with simulator tools is categorized by its level of effectiveness. A recent review of 108 articles on neurosurgical simulation training reported[12] that there were 15 models at level 2 and only six models above level 2. In other words, most papers[12] on simulation training describe useful methods for improving skills, not objective evaluation of skills. The objective evaluation method for microsurgical skills used in this study falls into McGaghie's classification level 2, but previous reports in this classification have used 3D model creations and cadavers[4], both of which require time and cost for model creation. Patel, et al.[12] pointed out the need for a cost-effective training simulator to sustain simulation training. Our method can be practiced with an artificial blood vessel or a chicken wing artery and has the potential to be a cost-effective method of technical evaluation when used in conjunction with a web-based review method.
Limitations
Due to the small number of participants, multi-factorial statistical analysis could not be performed. We requested expert reviewers from all over Japan, but not all facilities have experts, and some facilities do not have people who can conduct evaluations. The use of an artificial vessel model allows for a more basic setup at a lower cost; however, because the model lacks active blood flow, it limits the ability to evaluate anastomotic patency.
Ultimately, it is necessary to be able to objectively evaluate the relationship between actual improvement in procedural skill and subsequent patient prognosis. In other words, we would like to investigate the correlation between evaluation results from simulation training and patient prognosis.