Subjects
The structural MRI (sMRI) data used in our study were acquired from Soochow University and comprise 68 adolescents. The study was approved by the Ethics Committee of the Third Affiliated Hospital of Soochow University, and written informed consent was obtained from all subjects. Each subject was interviewed by a psychologist to rule out any mental or neurological disease, and no subject had previously received stimulants or hypnotics. All participants had normal or corrected-to-normal vision and were right-handed. After the test, each participant received a small gift or a financial reward. All subjects completed the Rosenberg Self-Esteem Scale (RSES), which was originally developed by Rosenberg in 1965 to assess adolescents' overall feelings of self-worth and self-acceptance and is the most widely used self-esteem measure in psychology [15]. We ranked the RSES scores from highest to lowest and divided the subjects into two groups: a high self-esteem group and a low self-esteem group. Table 4 provides detailed information on all participants.
Table 4. Demographic information of all subjects
|  | High self-esteem group | Low self-esteem group | p value |
| --- | --- | --- | --- |
| Subjects | 34 | 34 |  |
| Male/Female | 19/15 | 16/18 | 0.83 |
| Age (mean ± SD) | 21.90 ± 1.16 | 22.53 ± 1.42 | 0.77 |
| Rosenberg Scale (mean ± SD) | 25.35 ± 0.81 | 17.86 ± 3.35 | <0.001 |
The p-value for gender was obtained by the chi-squared test.
The p-values for age and the Rosenberg scale were obtained by t-tests.
The significance level was set to 0.05.
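As a point of reference, group comparisons of this kind can be reproduced with standard statistical routines. The sketch below is a minimal illustration, assuming the per-subject demographics are available as plain arrays; the placeholder data merely mimic the group sizes and summary statistics of Table 4 and are not the study's actual records.

```python
# Sketch of the Table 4 group comparisons (chi-squared test for gender,
# two-sample t-tests for age and RSES score). Data below are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder samples with the same group sizes as Table 4 (34 per group)
age_high = rng.normal(21.90, 1.16, size=34)
age_low = rng.normal(22.53, 1.42, size=34)
rses_high = rng.normal(25.35, 0.81, size=34)
rses_low = rng.normal(17.86, 3.35, size=34)
gender_table = np.array([[19, 15],   # high self-esteem: male, female
                         [16, 18]])  # low self-esteem:  male, female

# Chi-squared test for the gender distribution
chi2, p_gender, dof, expected = stats.chi2_contingency(gender_table)

# Two-sample t-tests for age and RSES score
_, p_age = stats.ttest_ind(age_high, age_low)
_, p_rses = stats.ttest_ind(rses_high, rses_low)

print(f"gender p={p_gender:.2f}, age p={p_age:.2f}, RSES p={p_rses:.3g}")
```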
Imaging acquisition and preprocessing
All images were collected on a 3T Siemens Medical Systems scanner with the following acquisition parameters: echo time (TE) = 2.98 ms, repetition time (TR) = 2300 ms, flip angle (FA) = 9°, voxel size = 1 × 1 × 1 mm³, slice thickness = 1 mm, and field of view (FoV) = 256 mm.
We use an automatic pipeline for sMRI image processing. Firstly, we adjusted the image orientation (axial, coronal, and sagittal) to match the template image, and performed offset field correction to remove the gray-scale unevenness of the image [19]. Secondly, the brain was extracted by removing the skull and cerebellum [20]. Thirdly, gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) were segmented from the background [21]. Fourth, the segmented image was registered to the template labeled with the Automated Anatomical Labeling (AAL) template [22]. Fifth, in order to calculate the morphological features based on the cortex, the middle layer of the cerebral cortex was constructed [23]. After the whole processing, the morphological measurements of GM volume, WM volume, CSF volume, cortical thickness, and cortical surface area of each ROI were obtained for each subject. It should be noted that we removed 12 subcortical ROIs from AAL template considering that the cerebral cortex contains more neurons.
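To make the output of this pipeline concrete, the sketch below shows one way the per-ROI measurements could be assembled into a per-subject feature vector once the pipeline has produced them. The file layout, column names, and helper function are illustrative assumptions; the 12 excluded regions listed are the standard bilateral subcortical AAL structures, which is consistent with the 78 cortical ROIs retained in Table 5 but is not spelled out in the text above.

```python
# Illustrative assembly of per-subject morphological features after preprocessing.
# Assumes each subject has a CSV with one row per AAL ROI and five measures;
# the file layout and column names are hypothetical.
import pandas as pd

MEASURES = ["gm_volume", "wm_volume", "csf_volume", "thickness", "surface_area"]

# Standard bilateral subcortical AAL regions (12 in total), assumed here to be
# the ones excluded so that 78 cortical ROIs remain.
SUBCORTICAL = {"Hippocampus_L", "Hippocampus_R", "Amygdala_L", "Amygdala_R",
               "Caudate_L", "Caudate_R", "Putamen_L", "Putamen_R",
               "Pallidum_L", "Pallidum_R", "Thalamus_L", "Thalamus_R"}

def load_subject_features(csv_path: str) -> pd.Series:
    """Load one subject's per-ROI table and flatten it to a feature vector."""
    roi_table = pd.read_csv(csv_path, index_col="roi_name")
    # Drop the subcortical regions, keeping the cortical ROIs
    roi_table = roi_table.drop(index=sorted(SUBCORTICAL & set(roi_table.index)))
    # Flatten to a vector indexed by (ROI, measure), e.g. ("Precentral_L", "thickness")
    return roi_table[MEASURES].stack()

# Stacking all subjects (hypothetical IDs) into a subjects-by-features matrix:
# features = pd.DataFrame({sid: load_subject_features(f"{sid}_roi.csv")
#                          for sid in subject_ids}).T
```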
Classification framework
The framework of the proposed classification algorithm based on the multi-resolution ROI brain network is shown in Fig. 5; it consists of three main stages: construction of multiple anatomical networks, feature selection, and classification. The multi-resolution ROI-based anatomical brain networks are constructed from morphological features (volumes of the different brain tissues, cortical thickness, and cortical surface area). Feature selection reduces the dimensionality of the high-dimensional brain network features, retaining only the features that best discriminate between the subjects. The optimal feature subset is then used to train the classifier, yielding neuroimaging markers that represent different self-esteem levels.
Construction of multiple anatomical networks
Through the image processing steps described above, the GM volume, WM volume, CSF volume, cortical thickness, and cortical surface area of each ROI were obtained from each subject's MRI image. To reduce inter-individual differences, these measurements were standardized: the volume measures of each ROI were divided by the subject's total intracranial volume, the cortical thickness by the subject's mean cortical thickness, and the cortical surface area by the subject's total cerebral cortical surface area. Using these normalized volume and cortical features provides a more appropriate and more objective representation. To improve classifier performance, we propose a four-layer hierarchical network framework in which brain templates of different ROI resolutions are used in each layer to construct the brain network nodes and edges.
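A minimal sketch of this normalization step is given below, assuming the raw per-ROI measures are stored in a table of the form assembled earlier; the column and variable names are illustrative, not prescribed by the text above.

```python
# Sketch of the per-subject normalization described above.
# Assumes `roi_table` has one row per ROI with the five raw measures;
# column names are illustrative.
import pandas as pd

def normalize_subject(roi_table: pd.DataFrame, total_icv: float) -> pd.DataFrame:
    """Normalize ROI measures by subject-level global quantities."""
    norm = roi_table.copy()
    # Volume measures are divided by the subject's total intracranial volume
    for col in ["gm_volume", "wm_volume", "csf_volume"]:
        norm[col] = roi_table[col] / total_icv
    # Cortical thickness is divided by the subject's mean cortical thickness
    norm["thickness"] = roi_table["thickness"] / roi_table["thickness"].mean()
    # Surface area is divided by the subject's total cortical surface area
    norm["surface_area"] = roi_table["surface_area"] / roi_table["surface_area"].sum()
    return norm
```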
Specifically, the bottommost template, containing 78 ROIs, is defined as the fourth-layer network (network 4 in Table 5), and the remaining three layers are defined as networks 1-3. A layer with a larger index contains higher-resolution ROIs and lies closer to the bottom of the hierarchy. By merging small brain regions into larger functional areas, the number of ROIs decreases from layer to layer. In the third layer (network 3), there are 36 ROIs, obtained by dividing the brain lobes into lateral, medial, and inferior surfaces. In the second layer (network 2), 14 ROIs are defined by referring to the anatomical brain structure of the central region and the frontal, parietal, occipital, temporal, limbic, and insular lobes. The specific definition rules of these ROIs can be found in Table 5. It is worth noting that in the first layer we study the brain as a whole.
Table 5. Regions of interest (ROIs) defined in the automated anatomical labeling (AAL) template.
| Network 2 |  | Network 3 |  | Network 4 |  |
| --- | --- | --- | --- | --- | --- |
| No. | Name of ROI | No. | Name of ROI | No. | Name of ROI |
| 1, 2 | Central region | 1, 2 | Central region: Precentral gyrus | 1, 2 | Precentral gyrus |
|  |  | 3, 4 | Central region: Postcentral gyrus | 53, 54 | Postcentral gyrus |
|  |  | 5, 6 | Central region: Rolandic operculum | 17, 18 | Rolandic operculum |
| 3, 4 | Frontal lobe | 7, 8 | Frontal lobe: Lateral surface | 3, 4 | Superior frontal gyrus (dorsal) |
|  |  |  |  | 7, 8 | Middle frontal gyrus |
|  |  |  |  | 11, 12 | Inferior frontal gyrus (opercular) |
|  |  |  |  | 13, 14 | Inferior frontal gyrus (triangular) |
|  |  | 9, 10 | Frontal lobe: Medial surface | 19, 20 | Supplementary motor area |
|  |  |  |  | 23, 24 | Superior frontal gyrus (medial) |
|  |  |  |  | 65, 66 | Paracentral lobule |
|  |  | 11, 12 | Frontal lobe: Orbital surface | 5, 6 | Orbitofrontal cortex (superior) |
|  |  |  |  | 9, 10 | Orbitofrontal cortex (middle) |
|  |  |  |  | 15, 16 | Orbitofrontal cortex (inferior) |
|  |  |  |  | 21, 22 | Olfactory |
|  |  |  |  | 25, 26 | Orbitofrontal cortex (medial) |
|  |  |  |  | 27, 28 | Rectus gyrus |
| 5, 6 | Temporal lobe | 13, 14 | Temporal lobe: Lateral surface | 67, 68 | Heschl gyrus |
|  |  |  |  | 69, 70 | Superior temporal gyrus |
|  |  |  |  | 73, 74 | Middle temporal gyrus |
|  |  |  |  | 77, 78 | Inferior temporal gyrus |
| 7, 8 | Parietal lobe | 15, 16 | Parietal lobe: Lateral surface | 55, 56 | Superior parietal gyrus |
|  |  |  |  | 57, 58 | Inferior parietal lobule |
|  |  |  |  | 59, 60 | Supramarginal gyrus |
|  |  |  |  | 61, 62 | Angular gyrus |
|  |  | 17, 18 | Parietal lobe: Medial surface | 63, 64 | Precuneus |
| 9, 10 | Occipital lobe | 19, 20 | Occipital lobe: Lateral surface | 45, 46 | Superior occipital gyrus |
|  |  |  |  | 47, 48 | Middle occipital gyrus |
|  |  |  |  | 49, 50 | Inferior occipital gyrus |
|  |  | 21, 22 | Occipital lobe: Medial and inferior surfaces | 39, 40 | Calcarine cortex |
|  |  |  |  | 41, 42 | Cuneus |
|  |  |  |  | 43, 44 | Lingual gyrus |
|  |  |  |  | 51, 52 | Fusiform gyrus |
| 11, 12 | Limbic lobe | 23, 24 | Limbic lobe: Temporal pole (superior) | 71, 72 | Temporal pole (superior) |
|  |  | 25, 26 | Limbic lobe: Temporal pole (middle) | 75, 76 | Temporal pole (middle) |
|  |  | 27, 28 | Limbic lobe: Anterior cingulate gyrus | 31, 32 | Anterior cingulate gyrus |
|  |  | 29, 30 | Limbic lobe: Middle cingulate gyrus | 33, 34 | Middle cingulate gyrus |
|  |  | 31, 32 | Limbic lobe: Posterior cingulate gyrus | 35, 36 | Posterior cingulate gyrus |
|  |  | 33, 34 | Limbic lobe: ParaHippocampal gyrus | 37, 38 | ParaHippocampal gyrus |
| 13, 14 | Insula | 35, 36 | Insula: Insula | 29, 30 | Insula |
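To illustrate how the coarser layers can be derived from the 78 bottom-layer ROIs, the sketch below aggregates one morphological measure according to a Table 5-style mapping. The mapping dictionary is abbreviated, the function name is illustrative, and the use of averaging as the merging rule is an assumption (volume measures might instead be summed); none of these choices are specified in the text above.

```python
# Illustrative aggregation of bottom-layer (network 4) ROI features into a
# coarser layer, following a Table 5-style mapping.
import numpy as np

# Maps each network-4 ROI index (1-78) to its network-3 region index (1-36).
# Only the Central region entries are shown; a full implementation would
# cover all 78 ROIs.
N4_TO_N3 = {1: 1, 2: 2,      # Precentral gyrus (L/R) -> Central region: Precentral
            53: 3, 54: 4,    # Postcentral gyrus (L/R) -> Central region: Postcentral
            17: 5, 18: 6}    # Rolandic operculum (L/R) -> Central region: Rolandic

def aggregate_layer(fine_features: np.ndarray, mapping: dict) -> np.ndarray:
    """Average one morphological measure over the fine ROIs of each coarse region.

    fine_features[i - 1] holds the measure of fine ROI i. Averaging is one
    reasonable merging rule; volume measures could instead be summed.
    """
    n_coarse = max(mapping.values())
    totals = np.zeros(n_coarse)
    counts = np.zeros(n_coarse)
    for fine_idx, coarse_idx in mapping.items():
        totals[coarse_idx - 1] += fine_features[fine_idx - 1]
        counts[coarse_idx - 1] += 1
    return totals / np.maximum(counts, 1)
```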
Feature selection
To reduce the feature dimensionality and retain the most discriminative features, we adopted a combined feature selection method. First, a two-sample t-test was used for preliminary selection, keeping only features whose p value fell below the significance threshold (p < 0.05). Then, redundant features were removed with the minimum redundancy maximum relevance (mRMR) method, so that the smallest possible set of features expressing the between-group differences is retained [24]. After these two filter-based steps, support vector machine recursive feature elimination (SVM-RFE) [25] was used to further reduce the feature dimension. Completing all feature selection steps yields the optimal feature subset.
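A minimal sketch of this three-stage selection is shown below. It assumes a feature matrix X (subjects × features) and binary labels y, and, because mRMR is not available in scikit-learn, replaces that stage with a simple correlation-based redundancy filter as a stand-in; the thresholds and the final subset size are illustrative.

```python
# Sketch of the combined feature selection: t-test filter, a redundancy filter
# standing in for mRMR, and SVM-RFE. Assumes X (n_subjects x n_features), y in {0, 1}.
import numpy as np
from scipy import stats
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def select_features(X, y, p_thresh=0.05, corr_thresh=0.9, n_final=20):
    X, y = np.asarray(X), np.asarray(y)

    # Stage 1: t-test filter -- keep features that differ between the two groups
    _, pvals = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    keep = np.where(pvals < p_thresh)[0]

    # Stage 2: redundancy filter (stand-in for mRMR) -- walk features from most
    # to least significant, dropping any highly correlated with one already kept
    selected = []
    for idx in keep[np.argsort(pvals[keep])]:
        if all(abs(np.corrcoef(X[:, idx], X[:, j])[0, 1]) < corr_thresh
               for j in selected):
            selected.append(idx)
    selected = np.array(selected, dtype=int)
    if len(selected) <= n_final:
        return selected

    # Stage 3: SVM-RFE with a linear SVM down to the final subset size
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=n_final)
    rfe.fit(X[:, selected], y)
    return selected[rfe.support_]
```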
Classification using multi-kernel SVM
There are two types of features in the multiple brain network: the high-resolution ROI features of the fourth layer, and the brain network features corresponding to the other layers. A multi-kernel machine learning method can integrate these two types of features into a single classifier. First, a Gaussian radial basis function (RBF) kernel is used to construct a kernel matrix for each feature type. Second, the two kernel matrices are combined into a multi-kernel matrix through appropriate weight coefficients [25]. Comparing a linear kernel with the (non-linear) RBF kernel, we found that the RBF kernel significantly improves classification performance; we therefore chose the RBF kernel to construct the multi-kernel classifier. Finally, the multi-kernel SVM is trained on the optimal feature subset obtained by feature selection.
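The weighted combination of two RBF kernel matrices can be realized with a precomputed-kernel SVM, as in the sketch below; the weight beta, the gamma value, and the variable names are illustrative choices rather than settings reported here.

```python
# Sketch of a two-kernel SVM: one RBF kernel per feature type, combined with a
# weight coefficient, and fed to an SVM with a precomputed kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def multi_kernel_svm(X_roi_train, X_net_train, y_train,
                     X_roi_test, X_net_test, beta=0.5, gamma=0.01):
    """Train and apply an SVM on beta*K_roi + (1-beta)*K_net (RBF kernels)."""
    # Kernel matrices between training samples
    K_train = (beta * rbf_kernel(X_roi_train, gamma=gamma)
               + (1 - beta) * rbf_kernel(X_net_train, gamma=gamma))
    # Kernel matrices between test and training samples
    K_test = (beta * rbf_kernel(X_roi_test, X_roi_train, gamma=gamma)
              + (1 - beta) * rbf_kernel(X_net_test, X_net_train, gamma=gamma))

    clf = SVC(kernel="precomputed")
    clf.fit(K_train, y_train)
    return clf.predict(K_test)
```

In practice, the weight beta would be tuned on the training data, for example inside the inner loop of the nested cross-validation described next.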
Cross-validation
We applied the nested cross-validation method used in our previous research. In the inner loop, the training set is used to determine the parameters of the classifier; in the outer loop, the testing set is used to evaluate the classifier's generalization ability. Note that at the beginning of the experiment the entire data set was randomly divided into two parts, one for training and the other for testing. The training and testing sets can be exchanged throughout the validation process, while the processing steps remain unchanged.
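As a point of reference, nested cross-validation of this kind can be written as an inner grid search wrapped in an outer evaluation loop. The sketch below uses a plain RBF-kernel SVM, five folds in each loop, and an illustrative parameter grid; these are assumptions for demonstration, not the exact settings used in the study.

```python
# Sketch of nested cross-validation: the inner loop (GridSearchCV) selects the
# classifier parameters on the training folds, and the outer loop estimates
# generalization performance on held-out folds. Grid values are illustrative.
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def nested_cv_accuracy(X, y, random_state=0):
    inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
    outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)

    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
    inner_search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=inner_cv)

    # Each outer fold refits the inner grid search on its training portion only
    scores = cross_val_score(inner_search, X, y, cv=outer_cv)
    return scores.mean(), scores.std()
```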