Ethics approval and consent to participate
All experimental methods were performed in accordance with the relevant international and national guidelines and regulations. All medical practices followed the Declaration of Helsinki on medical research involving human subjects. The study was approved by the Local Ethics Committee at Pavlov First St. Petersburg State Medical University and the Ethics Committee of the Almazov National Medical Research Centre. All subjects gave informed consent to participate in the study and to the publication of the data and images in an online open-access publication.
3D model of the hologram and adjacent anatomical structures
There are two main approaches to visualizing MRI and MSCT images: the first is voxel-based, displaying the raster data directly via volume rendering; the second is vector-based, segmenting the images and generating polygonal models that are displayed via surface (polygonal) rendering. Volume rendering produces a 3D view with automated boundary detection and color mapping based on the density of anatomical structures; in addition, fine details (smaller than one pixel) can be rendered with this approach. Despite these advantages of voxel visualization, the polygonal-model method was used for the cyst removal surgery. The reason is that all images are processed directly on the augmented reality glasses, and, because volume rendering is a very resource-intensive process [17], the HoloLens 2 glasses used cannot process such data sufficiently fast: the frame rate drops to 5–10 fps. Polygonal 3D models, by contrast, are rendered at 60 fps, which is essential for mixed reality visualization.
To use the glasses during surgery, we developed software that not only loads and positions the 3D model relative to the marker but also offers an additional user interface for intraoperative interaction [18], [19]. Because the HoloLens 2 glasses support multiple gestures and finger tracking, additional parameters can be set up for the visualization of the hologram. Notably, thanks to the interface based on gestures and virtual buttons, the surgeon can use the extensive range of functions incorporated in the glasses without touching non-sterile objects.
Furthermore, the HoloLens 2 glasses provide Spatial Mapping [20], which makes it possible to fix virtual objects at a certain position in space based on the analysis of data from a variety of sensors built into the glasses. This feature is especially important when the marker leaves the camera's field of view: the glasses then switch to Spatial Mapping to keep the hologram correctly positioned as the user moves.
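The fallback between marker tracking and Spatial Mapping can be sketched as a simple mode switch. The sketch below is purely illustrative and written in Python: the actual HoloLens 2 pipeline runs in its own runtime, and the function names and the dropout threshold here are assumptions, not the system's implementation.

```python
from enum import Enum, auto

class AnchorMode(Enum):
    MARKER = auto()        # hologram anchored to the tracked marker
    SPATIAL_MAP = auto()   # hologram anchored via Spatial Mapping

def select_mode(marker_visible: bool, frames_lost: int,
                max_lost: int = 5) -> AnchorMode:
    """Hypothetical fallback policy: keep marker-based anchoring through
    brief dropouts, switch to Spatial Mapping once the marker has been
    out of the camera's field of view for more than max_lost frames."""
    if marker_visible or frames_lost <= max_lost:
        return AnchorMode.MARKER
    return AnchorMode.SPATIAL_MAP
```

For example, `select_mode(False, 10)` returns `AnchorMode.SPATIAL_MAP`, modeling the switch described above when the marker stays hidden.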
Automated navigation system based on the HTC VIVE virtual reality trackers
This solution incorporates an additional system for detecting and tracking the positions of the observer (wearing the holographic glasses) and the patient relative to the operating room. With this approach, no auxiliary frames or systems are needed at the pre-operative stage; all adjustments are carried out through a quick calibration immediately before surgery.
The HTC Vive positioning system used to implement this approach is equipped with two trackers: one attached to the glasses and the other to the calibration pointer. The general principle is that, at the calibration stage, the HTC Vive trackers locate the positions of the patient and the glasses relative to the operating room, and the resulting global coordinate system is aligned with the local coordinate system of the glasses. The key element of this approach is the calibration process itself, as it determines the accuracy with which the two coordinate systems can be aligned and, consequently, the actual positioning accuracy when tracking the observer and the patient.
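Aligning the global (HTC Vive) frame with the local frame of the glasses amounts to applying a rigid homogeneous transformation. A minimal sketch in Python follows; the parameterization by a single yaw rotation plus X/Z offsets is an assumption for illustration (the actual calibration may estimate a full 6-DoF pose), and the function names are ours.

```python
import numpy as np

def make_transform(yaw_rad: float, dx: float, dz: float) -> np.ndarray:
    """Build a 4x4 homogeneous transform: a rotation about the vertical
    (Y) axis combined with translation offsets along X and Z, as might be
    measured with the tracked calibration pointer."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])   # yaw rotation
    T[0, 3], T[2, 3] = dx, dz              # X and Z offsets
    return T

def to_glasses(T_vive_to_glasses: np.ndarray, p_vive: np.ndarray) -> np.ndarray:
    """Map a point from HTC Vive coordinates into the glasses' frame."""
    p = np.append(p_vive, 1.0)             # homogeneous coordinates
    return (T_vive_to_glasses @ p)[:3]
```

Once such a matrix is fixed by calibration, every tracker reading can be mapped into the glasses' frame with a single matrix multiplication per point.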
The calibration is broken down into several stages. The first stage consists of finding the difference between the coordinate systems of the glasses and the HTC Vive; the two systems are then aligned using a transformation matrix. This is achieved by using a QR code to visualize the coordinate axes of the glasses in the operating room space and setting the offsets along the X and Z axes with the pointer to which the HTC Vive tracker is attached. After that, the coordinate system of the glasses is fully synchronized with the HTC Vive system. At the second stage, the position of the patient's head relative to the operating room is detected. For this purpose, a point cloud is collected with the tracked pointer by moving it over the patient's head, in particular along the forehead and the bridge of the nose. This point cloud is then automatically registered to the head model obtained from CT, and the model is 'moved' to the location where the point cloud was collected, that is, to where the patient's head is positioned.
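The automatic correlation of the pointer point cloud with the CT head model is a rigid registration problem, typically solved iteratively (e.g. with ICP). The core step of such methods, the least-squares rigid fit for known point correspondences (the Kabsch algorithm), can be sketched as follows. This is an illustrative sketch under that assumption, not the system's actual implementation.

```python
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: a 'forehead' cloud displaced by a known rigid pose
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
R_est, t_est = kabsch(pts, pts @ R_true.T + t_true)  # recovers R_true, t_true
```

In a full ICP loop, correspondences between the pointer cloud and the CT surface would be re-estimated (e.g. by nearest-neighbor search) and this fit repeated until convergence.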
The HTC Vive tracking system has a number of advantages over existing optical navigation systems. In particular, the sensors can be tracked over an area of 6 × 6 meters, which covers almost the entire operating room and makes it possible to position the base stations so that they do not interfere with other equipment. The system can work with up to four base stations, and tracking is possible even when a sensor is within the field of view of only one station. Finally, the HTC Vive optical tracking system is the most affordable on the market, which can significantly reduce the cost of the finished product.