Collaborative virtual treatment planning across dental specialties, based on accurate, reproducible, and safe 3D images and digital analytics, requires continual innovation, particularly given the speed at which CBCT technology is evolving and disrupting current practice. In orthodontics, 3D imaging improves diagnosis and treatment planning in a wide variety of cases, especially in orthognathic surgery and dentofacial deformities, with substantial evidence for its accuracy (17–20).
Nevertheless, landmark identification in 3D images is not an easy task and may lead to inter- and intra-examiner errors. However, when associated images from multiplanar views were used in conjunction with 3D models, the precision of landmark localization improved (21–22). The mid-sella point can also be located more accurately and readily on the 3D image because all three planes can be visualized simultaneously (19). Additionally, other landmarks in the midsagittal plane were more easily identified owing to their similarity to those on the 2D lateral cephalogram (23). The landmark gnathion (Gn), on the other hand, still showed low intra-examiner correlation with respect to the Z-axis (sagittal plane). According to Baumrind and Frantz (24), one probable explanation is that reference points located on a prominence or curvature show higher variability than landmarks at well-defined, planar positions (25). Nonetheless, the scatter plots revealed a normal distribution of all cephalometric landmarks plotted on the 3D images, as the points did not deviate greatly from one subject to another. More importantly, our inter-examiner and intra-examiner reliability levels were within acceptable limits.
The study by Bholsithi et al. (26) reported linear and angular measurement norms for Thai subjects in 2D and 3D cephalometric analysis, but a 3D template for the Thai population had yet to be established. Hence, from the mean coordinates of 21 commonly used cephalometric landmarks, which showed gender dimorphism (p < 0.05), we have proposed two cephalometric templates, one for Thai male adults and one for Thai female adults.
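To make the construction of such templates concrete, the sketch below illustrates, under assumed inputs, how the mean 3D coordinates of each landmark could be computed separately for males and females and how gender dimorphism could be screened with a simple per-axis test. The file name, column layout, and Welch t-test are illustrative assumptions, not the exact data format or statistical procedure used in this study.

```python
# Hypothetical sketch: sex-specific landmark templates from mean 3D coordinates,
# with a simplified per-axis screen for gender dimorphism (p < 0.05).
# The CSV name and the "sex", "landmark", "x", "y", "z" columns are assumptions.
import pandas as pd
from scipy import stats

data = pd.read_csv("landmark_coordinates.csv")   # one row per subject per landmark

# Mean x, y, z of every landmark, computed separately for each sex;
# these means form the male and female template coordinates.
templates = data.groupby(["sex", "landmark"])[["x", "y", "z"]].mean()

# Welch t-test on each axis as a rough screen for gender dimorphism.
for landmark, group in data.groupby("landmark"):
    male = group[group["sex"] == "M"]
    female = group[group["sex"] == "F"]
    for axis in ["x", "y", "z"]:
        _, p = stats.ttest_ind(male[axis], female[axis], equal_var=False)
        if p < 0.05:
            print(f"{landmark} {axis}: dimorphic (p = {p:.3f})")

print(templates)
```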
To use these templates, a 3D reconstruction of the patient's skull needs to be created and converted from a .dcm to a .stl file. Several free, open-source software packages with this capability are available; of these, 3D Slicer is recommended. 3D Slicer is compatible with any computer running Windows, Mac, or Linux systems released within the last five years, although older systems may still run the software depending mainly on their graphics capabilities. The CBCT data in .dcm format can be processed in the ‘Segment Editor’ module to produce a reconstructed 3D model of the skull, which can then be exported as a .stl file.
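For readers who prefer a scripted route, the following sketch outlines an equivalent threshold-based .dcm-to-.stl conversion using pydicom, scikit-image, and numpy-stl rather than the 3D Slicer GUI. The folder path, output file name, and bone threshold are assumptions; an appropriate threshold depends on the CBCT scanner and acquisition settings, and the Segment Editor workflow described above remains the recommended approach.

```python
# Illustrative threshold-based reconstruction of a skull surface from CBCT slices.
# Paths and the bone threshold are hypothetical; adjust them to the actual data set.
import glob
import numpy as np
import pydicom
from skimage import measure
from stl import mesh

dicom_dir = "cbct_slices"         # assumed folder containing the .dcm slices
output_stl = "skull_model.stl"    # assumed output file name
bone_threshold = 400              # assumed intensity threshold for bone

# Load all slices and stack them into a volume, ordered along the scan axis.
slices = [pydicom.dcmread(path) for path in glob.glob(f"{dicom_dir}/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

# Extract the bone isosurface with marching cubes; pass the voxel spacing here
# if the exported model must preserve real-world dimensions.
verts, faces, _, _ = measure.marching_cubes(volume, level=bone_threshold)

# Write the triangulated surface to a .stl file.
skull = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    skull.vectors[i] = verts[face]
skull.save(output_stl)
```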
Strengths
The strength of this study is that it creates a cutting-edge cephalometric norm in virtual form using freely available software. Therefore, this method can be repeated practically and economically to create cephalometric norms for any racial group. This innovation also paves the way for futuristic orthodontic consultation. The 3D cephalometric template can be superimposed on the 3D model of the skull using the freeware 3D Builder. This software enables clinicians to examine the dysmorphology of facial structures without needing to plot landmarks beforehand.
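As a programmatic counterpart to the 3D Builder workflow described above, the following sketch loads the template and the patient's skull model and displays them superimposed with the trimesh library. The file names are assumptions, and in practice the template would first be registered to the patient's orientation (for example, on shared midsagittal landmarks).

```python
# Illustrative superimposition of a cephalometric template on a patient's skull model.
# File names are hypothetical; alignment to the patient is assumed to have been done.
import trimesh

template = trimesh.load("thai_male_template.stl")   # assumed template mesh
skull = trimesh.load("skull_model.stl")             # patient model exported earlier

# Color the meshes differently so deviations from the norm are easy to inspect.
template.visual.face_colors = [255, 0, 0, 120]       # semi-transparent red template
skull.visual.face_colors = [200, 200, 200, 255]      # opaque grey skull

# Show both meshes in one interactive scene.
trimesh.Scene([skull, template]).show()
```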
Moreover, numerous technological devices available today can be used to view 3D models, including mixed reality (MR) headsets, Xbox consoles, personal computers, smartphones, and tablets. With the aid of such technological advancements, various medical specialties would be able to co-develop better and easier diagnoses and treatment plans.
In detail, MR is the merging of the real and virtual worlds to produce new environments and visualizations in which physical and digital objects co-exist and interact in real time. MR does not take place exclusively in either the physical or the virtual world, but is a hybrid of augmented reality and virtual reality. An exciting application would be the incorporation of MR headsets, such as the Microsoft HoloLens, and smart glasses into patient examination to aid orthodontic diagnosis and treatment planning. For instance, the 3D cephalometric template of the population norm could be displayed conveniently together with the patient’s 3D skull CBCT model, while overlaying precise visual guides for clinicians. MR may open up a vast array of possibilities for enhancing the explanation of the virtual plan, not only for patients but also for virtual planning among different medical and dental specialties. Additionally, at this stage of the pandemic, MR that combines these cephalometric templates with the patient’s reality can enhance teledentistry consultations. This user-friendly virtual medium could also be used within the metaverse and, in the very near future, in virtual meetings involving multiple dental specialties together with patients and their guardians, partners, or relatives.
Limitations
At the time of writing, no other 3D cephalometric templates are available for the Asian population. The templates developed in our study are most effective when applied to Thai adults; therefore, caution should be taken if they are applied to patients in other regions, although Cambodian and Vietnamese skull measurements have been found to be highly similar to those of Thais (27–29).
As the sample size in this study was rather limited, additional research with a larger sample should be conducted to better represent the diversity of the population. Another requirement would be the development of an easily accessible and user-friendly computer platform for this 3D craniofacial analysis.
Moreover, the further development of 3D cephalometric templates may serve as a bridge to studies and analyses that move beyond linear and angular measurements, especially once these limitations are addressed. Finally, advantage should be taken of the rapid growth in imaging technology by exploring advanced methods of skeletal analysis, particularly those using AI and machine learning (30).