Surgical Navigation Systems Based On AR/VR Technologies

This study considers modern surgical navigation systems based on augmented reality technologies. Augmented reality glasses are used to construct holograms of the patient's organs from MRI and CT data, which are subsequently transmitted to the glasses. Thus, in addition to seeing the actual patient, the surgeon gains visualization inside the patient's body (bones, soft tissues, blood vessels, etc.). The solutions developed at Peter the Great St. Petersburg Polytechnic University reduce the invasiveness of the procedure and preserve healthy tissues. They also improve the navigation process, making it easier to estimate the location and size of the tumor to be removed. We describe the application of the developed systems to different types of surgical operations (removal of a malignant brain tumor, removal of a cyst of the cervical spine). We consider the specifics of novel navigation systems designed for anesthesia and for endoscopic operations. Furthermore, we discuss the construction of novel visualization systems for ultrasound machines. Our findings indicate that the proposed technologies show potential for telemedicine.


Introduction
Attempting to reduce aggressive surgical procedures is a major trend in modern neurosurgery. Surgical trauma can be mitigated via minimal-access techniques and optimal surgical trajectories for approaching pathological lesions [1]. Minimally invasive approaches are now being introduced in cerebral and spinal surgery, along with improving techniques for spinal implantation and fixation; this allows reducing aggressive surgical interventions and shortening treatment times considerably. Among other things, this is achieved through intraoperative navigation, which has increasingly stringent requirements imposed on its precision and accuracy [2], [3], [4].
The key problems of neuronavigation can be formulated as metaphorical questions asked by the surgeon: "Where is my target located?", "What is my current location?", "How can I safely approach the target?".
While modern navigation systems allow answering these questions with sufficient accuracy, they also come with certain drawbacks [5], the major ones being that the images generated are 2D and the screen is located away from the patient. The skill of synchronizing precise manipulations in the wound with the image on the screen outside the surgical field can take considerable time to perfect.
Simultaneously displaying several images corresponding to different planes on the screen mostly compensates for this drawback, but this means processing a much larger amount of visual data [6], [7].
Moreover, a 3D image has to be reconstructed mentally from the images projected in predefined planes, as well as in the plane of the diagnostic probe, in order to visualize the configuration of the given anatomical landmarks. Using such an interface increases the surgeon's fatigue and distraction from the operative field [8], [9], which undermines the goal of modern neuronavigation, namely, reducing human error in surgery [10].
For this reason, the approach to augmented reality technologies [11], [12] employed in this study has been adapted to the specifics of this class of surgeries.
Very few studies consider the clinical application of augmented reality technologies in surgical interventions. The results of treating salivary stones with maxillofacial surgery are given in [13], [14]. A very effective navigation system was developed in [15], [16] for certain types of spinal surgery.
Existing methods for visualization of anatomical structures (ultrasound, MSCT and MRI) allow localizing malignant tumors relative to vital structures (vessels, nerves, hollow organs) before the surgery.
Intraoperative visualization is limited to electromagnetic navigation systems tracking the position of navigated surgical instruments relative to anatomical structures during the surgery, so that surgeons can monitor the patient-specific anatomical parameters. In this case, however, the navigation system displays either a static or an intraluminal (intracavitary) image. None of these methods allows obtaining a 3D spatial image of the operative field. Furthermore, previous surgical interventions leading to deviations from the typical anatomy can jeopardize the safety of surgery, increasing the risk of damage to vital structures, blood vessels, and nerves. Another problem is that tissues are extremely mobile during the surgery: it is difficult to fix them in a certain position, which in turn complicates the process of combining MRI and MSCT data. The slightest turn of the head or a change in the position of the spine results in a considerable displacement of tissues and organs relative to the MSCT and MRI data obtained before the surgery.
The approaches outlined in this paper are aimed at solving the above problems for different types of surgical operations using augmented reality glasses.

Results
Clinical Case 1 (surgery to remove a brain tumor)
A 54-year-old patient was hospitalized with progressive glioblastoma in the left frontal lobe.
Brain MRI revealed an intracerebral tumor of the left frontal lobe with mass effect and indistinct margins; contrast uptake was moderately inhomogeneous, the part of the tumor enhanced with the contrast agent had a size of 7 × 6 × 6 cm, the adjacent structures of the brain were compressed, and the median structures were displaced to the right.
As the tumor progressed, the patient developed symptoms of intracranial hypertension; for this reason, the first surgery to remove a brain tumor using mixed reality glasses was performed at the Almazov National Medical Research Centre: repeat left frontotemporal craniotomy, microsurgical removal of the growing tumor of the left frontal lobe with neurophysiological control and multimodal navigation (Medtronic StealthStation S7 + Hololens mixed reality).
Mixed reality was used for preoperative planning, intraoperative marking of the operative field, and control of surgical aggressiveness.
After the patient was positioned, the frame was mounted; the tumor was then localized and surgical access to it was planned (Figure 1).
Using mixed reality at the planning stage allowed achieving optimal access and avoiding unnecessary tissue dissection.
After the dura mater was approached via mixed reality, the surgical site was marked and labeled to estimate the intended durotomy length. Resection radicality was constantly monitored during the surgery. Intraoperative monitoring data indicated that the tumor was completely removed, which was confirmed by postoperative brain CT.
Clinical Case 2 (surgery to remove a cyst in the cervical spine)
The described approach was implemented during a surgical intervention in a female patient with a midline neck cyst.
It was decided to use the visualization system based on mixed reality glasses during the operation to track the exact boundaries and syntopic characteristics of the location of the cyst and fistula; this way, the spatial relationship and localization of the cyst relative to the neck organs could be determined as accurately as possible, accounting for the patient-specific anatomy. Simultaneous intraoperative neuromonitoring of nerve integrity was performed via the Medtronic NIM-neuro 3.0 system. Because the operative field was localized in the cervical spine, the patient's position during the surgery had to reproduce the exact position during the MSCT scan for the hologram to be positioned precisely. A mask immobilizing the head and shoulder girdle in a certain position, allowing this position to be reproduced during a future operation, was used for this purpose (Figure 2).
Analyzing the clinical experience accumulated with the mixed reality technology in surgery, we can conclude that this technique provided improved visualization of the real situation within the operative field, yielding a clearer intraoperative picture of the spatial localization of malignant neck tumors relative to vital structures (vessels, nerves, hollow organs), complementing the data obtained at the preoperative stage from ultrasound, MSCT and MRI studies.

Discussion
As we can see, the experience of using augmented reality technologies for different types of surgical interventions has been largely positive; let us now consider another type of surgery, namely, the specifics and prospects of endoscopic operations.
Endoscopy has come a long way from an exclusively diagnostic technique with the sole task of detecting the extent of disease to an independent discipline using flexible tools inserted through the body's natural orifices to treat conditions that could previously only be addressed by surgical methods. A clear benefit of flexible endoscopy is its low invasiveness: the endoscopic tools are inserted into the patient's body through the natural orifices (e.g., the mouth or the anal canal); moreover, the endoscope can take the shape of the structures where the procedure is performed. The inserted part of the endoscope can combine multiple functions, serving as a camera transmitting images, a means for delivering instruments for performing manipulations, and a tube supplying gases and liquids to generate a working environment.
Natural orifice transluminal endoscopic surgery (NOTES), performed in cavities and organs accessed through the mucous membranes of the oral cavity, esophagus, and rectum, seems to be a promising direction in flexible endoscopy. Transesophageal access to the mediastinum is an advantageous technique that is fairly well understood by now thanks to tunnel endoscopic surgeries in the esophagus (POEM, STER), minimizing the access trauma and delivering the necessary tools to the operative field within minutes from the start of the surgery. The issue of navigation remains unresolved, because minimizing access implies that the tunnel formed should lead precisely to the area of interest (the hit-to-kill principle); otherwise, the operating surgeon is forced to spend a large amount of time searching for the target mediastinal site, which may carry an increased risk of complications. Since intraoperative navigation remains a challenge at present, the level of the tunnel and the direction in which it is formed are chosen intuitively, based on the experience of the operating surgeon and the data of presurgical examination. Direct intraoperative navigation, where not only the area of interest but also the mediastinal structures would be displayed on the endoscopic surgeon's screen (Figure 3), could greatly simplify the surgery, speed it up, make it safer and, accordingly, improve its reproducibility, making it a standard surgical procedure accessible to many surgeons.
The augmented reality technologies described also have promising applications in anesthesiology. Being able to see vessels through the glasses and visualize the position of the syringe needle from ultrasound data makes it possible to build a completely new technology for these types of procedures.
Finally, combining augmented reality technologies with telemedicine technologies means that operations can be performed remotely by surgical team members from different locations.Surgeons outside the OR can detect/outline the risk zones directly on their screens, transmitting this information to the operating surgeon's glasses.

Methods
Ethics approval and consent to participate
All experimental methods were performed in accordance with the relevant international and national guidelines and regulations. All medical practices followed the Declaration of Helsinki on medical research involving human subjects. The study was approved by the Local Ethics Committee at Pavlov First St. Petersburg State Medical University and the Ethics Committee of the Almazov National Medical Research Centre. All subjects gave informed consent to participate in the study and to publish the data and images in an online open-access publication.

3D model of the hologram and adjacent anatomical structures
There are two main approaches to visualizing MRI and MSCT images: the first is based on voxel raster graphics using volume rendering, and the second is vector-based, segmenting the data and generating polygonal models rendered as polygons. Volume rendering produces a 3D model with automated boundary detection and color mapping based on the density of anatomical structures. In addition, fine details (smaller than one pixel) can be rendered with this approach. Despite these advantages of voxel visualization, a method based on polygonal models was used for the surgery to remove the cyst. The reason is that all images are processed directly by the augmented reality glasses, and, as voxel rendering is a very resource-intensive process [17], the Hololens 2 glasses used cannot process such data sufficiently fast, with the frame rate dropping to 5-10 fps. Polygonal 3D models, on the other hand, are rendered at 60 fps, which is extremely important for mixed reality visualization.
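To illustrate the polygonal approach, the surface extraction step can be sketched with the marching cubes algorithm on a density volume. This is a minimal sketch, not the actual pipeline used with the glasses: the synthetic volume, the grid size, and the Hounsfield-like threshold below are illustrative assumptions.

```python
import numpy as np
from skimage.measure import marching_cubes

# Synthetic stand-in for a CT/MSCT volume: a high-density sphere in air.
# In practice the array would be reconstructed from a DICOM series.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
dist = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2 + (z - 32.0) ** 2)
volume = np.where(dist < 20.0, 300.0, -1000.0)  # roughly bone vs. air, in HU

# Extract a triangle mesh at a density threshold between the two
# materials; the vertex/face arrays can then be rendered as an
# ordinary polygonal model at interactive frame rates.
verts, faces, normals, values = marching_cubes(volume, level=0.0)
print(len(verts), len(faces))
```

The extracted mesh would typically be decimated and exported (e.g., to OBJ or glTF) before being loaded by the renderer on the glasses.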
To use the glasses during the surgery, we developed software that not only loads and positions the 3D model relative to the marker but also offers an additional user interface for intraoperative interaction [18], [19]. Because the Hololens 2 glasses support multiple gestures and finger tracking, additional parameters can be set up for the visualization of the hologram. Notably, thanks to the interface based on gestures and virtual buttons, the surgeon can use the extensive range of functions incorporated in the glasses without touching non-sterile objects.
Furthermore, the Hololens 2 glasses also provide Spatial Mapping [20], making it possible to fix virtual objects in a certain position in space based on analysis of data from a variety of sensors built into the glasses. This feature is especially vital when the marker disappears from the camera's field of view: the glasses then switch to Spatial Mapping to position the hologram as the user moves.

Automated navigation system based on the HTC VIVE virtual reality trackers
This solution adds an additional system for detecting and tracking the position of the observer (wearing the holographic glasses) and the patient relative to the operating room. With this approach, there is no need to use any auxiliary frames and systems at the pre-operative stage, so all adjustments can be carried out by quick calibration immediately before the surgery.
The HTC Vive positioning system used for implementing this approach is equipped with two trackers, one attached to the glasses, and the other to the calibration pointer.The general principle is that the HTC Vive trackers locate the position of the patient and the glasses relative to the operating room at the calibration stage, and the resulting global coordinate system is aligned with the local coordinate system of the glasses.The key element of this approach is the calibration process itself as it determines the accuracy with which these two coordinate systems can be aligned and the actual positioning accuracy for tracking the observer and the patient.
The calibration is broken down into several stages. The first stage consists in finding the difference between the coordinate systems of the glasses and the HTC Vive. Next, the systems are aligned using a transformation matrix. This is achieved by using a QR code to visualize the coordinate axes of the glasses in the operating room space and setting the offsets for the X and Z axes using the pointer to which the HTC Vive tracker is attached. After that, the coordinate system of the glasses is fully synchronized with the HTC Vive system. At the second stage, the position of the patient's head relative to the operating room is detected. A point cloud is generated for this purpose using the pointer with the HTC Vive tracker: the pointer is moved over the patient's head, in particular, along the forehead and the bridge of the nose. Next, this point cloud is automatically correlated with the head model obtained from CT, and the model is 'moved' to the location where the point cloud was collected, namely, where the patient's head is positioned.
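The final step of the second stage, fitting the collected point cloud to the CT head model, amounts to estimating a rigid transform (rotation plus translation). A minimal sketch of that computation using the Kabsch algorithm, under the assumption that point correspondences are already known (in practice an iterative closest point scheme would establish them):

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: rotation R and translation t such that dst ≈ src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Example: recover a known head pose from pointer samples.
rng = np.random.default_rng(0)
model = rng.standard_normal((30, 3))          # points on the CT head model
angle = np.deg2rad(15.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
cloud = model @ R_true.T + np.array([0.10, -0.20, 0.05])  # tracked samples

R, t = rigid_transform(model, cloud)
```

Applying `model @ R.T + t` then reproduces the sampled cloud, i.e., places the CT model where the patient's head actually is.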
The HTC Vive tracking system has a number of advantages over existing optical navigation systems. In particular, the area within which the position of the sensors can be tracked is 6 × 6 meters, which covers almost the entire operating room, making it possible to position the base stations so that they do not interfere with other equipment. The system can work with four base stations; tracking is possible even if a sensor is within the field of view of only one station. Finally, the HTC Vive optical tracking system is the most affordable on the market, which can significantly reduce the market value of the finished product.

Conclusions
Surgical navigation systems generating virtual 3D images of the operative field combined with the intraoperative picture can revolutionize the concept of preoperative planning and surgical navigation, prompting a shift towards smart and intuitive assistance. Additionally, the quality of surgical treatment can be improved, laying the groundwork for extremely sophisticated, often unparalleled surgical interventions.
We believe that integrating such systems into everyday surgical practice will not only produce positive clinical and economic outcomes but will also be greatly beneficial for the emotional wellbeing of both the recovered patient and the surgeon providing the medical care.

Figures

Figure 1