Background of Research
Modern Operating Room (OR) protocols mandate that medical personnel manually verify that all surgical tools are present, intact, and usable before an operation, after the operation, and following the transfer of the tools to sterilisation areas (1). While this reduces preventable errors such as the use of defective surgical instruments or retained surgical instruments (2), which can result in consequences ranging from financial costs to the loss of human lives (3), this method of surgical tool identification and accounting is time-consuming. Differences in workflow between OR teams in different hospitals further reduce inter-organisational effectiveness.
Developing a surgical tool recognition system that eliminates the need for human intervention could shorten the intervals between operations, as many other OR-related procedures, such as automated clinical assistance and staff assignment, have been shown to benefit from surgical workflow automation (4).
Prior studies have utilised Computer Vision and Machine Learning (5) to account for surgical tools, with varying degrees of success. Compared with traditional Computer Vision techniques such as edge detection, corner detection, and template matching, Deep Learning achieves greater accuracy in object detection. Because large numbers of surgical tool images can be captured in a controlled laboratory setting, Deep Learning models can be trained on a substantial dataset, yielding detections that are significantly more accurate than those produced by hand-coded Computer Vision algorithms. Deep Learning also offers greater flexibility: the object detection model can be fine-tuned with only minor changes to the training dataset (6). This makes the object detection system easier to improve and maintain.
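To make the fine-tuning idea concrete, the sketch below adapts a detector pretrained on a general-purpose dataset to a small set of surgical tool classes. It is a minimal illustration, assuming PyTorch and torchvision; the class names, learning rate, and data format are illustrative assumptions rather than choices made in this study.

```python
# Minimal sketch: fine-tuning a pretrained object detector on a hypothetical
# surgical-tool dataset. Class names and hyperparameters are illustrative.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical surgical-tool classes; index 0 is reserved for the background.
CLASSES = ["__background__", "scalpel", "forceps", "needle_holder", "retractor"]

def build_model(num_classes: int) -> torch.nn.Module:
    # Start from a detector pretrained on COCO, then replace its box
    # predictor so it scores our tool classes instead of the COCO classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_model(num_classes=len(CLASSES))
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_step(images, targets):
    # `images` is a list of CHW float tensors; `targets` is a list of dicts
    # with "boxes" (N x 4) and "labels" (N,) for each image.
    model.train()
    loss_dict = model(images, targets)   # detection losses returned in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Only the detector's prediction head needs to be swapped for the new classes, so updating the tool list or adding images requires retraining on the revised dataset rather than re-engineering hand-crafted features.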
Consequently, this study envisions a surgical tool-checking system built on Deep Learning. We foresee an eventual system in which, at the various checkpoints of transport and checking, Deep Learning models integrated into cameras and 3D scanners simultaneously account for surgical tools and check them for defects or distortion. Such a system would streamline the entire process, reducing wait times and enabling more operations to take place, whilst ensuring the accuracy and reliability of surgical tool checks before and after surgery.
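As a rough illustration of the accounting step at such a checkpoint, the sketch below reuses the model and class list from the previous sketch, runs inference on a single tray image, and compares the detected tool counts against an expected checklist. The confidence threshold and checklist contents are illustrative assumptions, not part of the envisioned clinical protocol.

```python
# Hedged sketch of tool accounting at a checkpoint: detect tools in one tray
# image and report any checklist items that appear to be missing.
from collections import Counter
import torch

@torch.no_grad()
def account_for_tools(model, image, expected: dict, score_threshold: float = 0.7):
    model.eval()
    output = model([image])[0]                      # boxes, labels, scores for one image
    keep = output["scores"] >= score_threshold      # discard low-confidence detections
    detected = Counter(CLASSES[i] for i in output["labels"][keep].tolist())
    # Tools on the checklist that were detected fewer times than expected.
    missing = {tool: n - detected.get(tool, 0)
               for tool, n in expected.items() if detected.get(tool, 0) < n}
    return detected, missing

# Example checklist for a hypothetical instrument tray.
expected_tray = {"scalpel": 2, "forceps": 4, "needle_holder": 2}
```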
Hypothesis of Research
We hypothesise that a system equipped with Deep Learning capabilities will increase the efficiency of accurate surgical tool checks before and after surgery, while maintaining an acceptable level of confidence in the detection model.