Theory
The equivariance of a tensor refers to the behavior that the tensor changes under the transformation of the coordination system, in contrast to the invariance of the scalar (e.g., the energy) which remains unchanged. The traditional GNNs embed the scalar information of graphs such as species and inter-atomic distance into the invariant features of nodes and edges, which are updated iteratively. GNNs in molecular and materials science widely adopt the atom-centric scalar prediction framework37, which expresses the energy of a crystal as the sum of the atomic energies predicted by the invariant features of atoms. Here, we generalize the atom-centric framework of GNNs to tensors and represent each crystal rank-M tensor property (\({T}^{M}\)) as the average of the atomic rank-M tensor contributions (\({T}_{j}^{M}\)). The key to the problem is how to predict the atomic tensors. The illustration of predicting the atomic rank-M tensors in this framework is shown in Fig. 1. This tensor prediction framework can be expressed by the following formulas:
$$T^{M}=\frac{1}{N}\sum_{j=1}^{N} T_{j}^{M}, \tag{1}$$
$$T_{j}^{M}=\sum_{k_i \in N(j)} t_{k_1 k_2 \cdots k_n j}\,\left(\vec{e}_{k_1 j}\right)^{\otimes f_1} \otimes \left(\vec{e}_{k_2 j}\right)^{\otimes f_2} \otimes \cdots \otimes \left(\vec{e}_{k_n j}\right)^{\otimes f_n}, \tag{2}$$
where N is the number of atoms in the cell, and \({\overrightarrow{e}}_{{k}_{i}j}\) is the unit edge vector from the neighboring atom ki to the central atom j. \(N\left(j\right)\) denotes the set of all neighbors of atom j within a cutoff radius \({r}_{c}\). \({\left({\overrightarrow{e}}_{{k}_{i}j}\right)}^{\otimes {f}_{i}}\) is the \({f}_{i}\)-fold (\({f}_{i}\ge 1\)) tensor product of \({\overrightarrow{e}}_{{k}_{i}j}\), with \({\sum }_{i=1}^{n}{f}_{i}=M\). \({t}_{{k}_{1}{k}_{2}\cdots {k}_{n}j}\) is a scalar coefficient of the cluster containing n individual neighboring atoms, derived from the features \({m}_{{k}_{1}{k}_{2}\cdots {k}_{n}j}\) of the j-centered cluster. The summation in Eq. (2) runs over all permutations of the n neighbors of atom j. Eq. (2) is a general expression of the atomic rank-M tensor expansion, which requires at most M individual edge vectors to construct the tensor bases. A symmetric crystal tensor can be represented as the average of symmetric atomic tensors, which can be expanded with symmetric tensor bases. In this case, n can be smaller than M, since the symmetric tensor bases can be constructed by multi-fold tensor products of the edge vectors. This greatly reduces the complexity of the expansion and avoids unnecessary computation.
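As a concrete illustration, the simplest symmetric instance of Eq. (2) (n = 1, f1 = 2, i.e., a rank-2 tensor built from 2-fold products of single edge vectors) can be sketched in a few lines of NumPy. The coefficient values here are placeholders for what the network would predict.

```python
import numpy as np

def atomic_rank2_tensor(edge_vecs, coeffs):
    """Sketch of Eq. (2) with n = 1 and f_1 = 2:
    T_j = sum_k t_kj * (e_kj ⊗ e_kj), with e_kj a unit edge vector.
    The coefficients t_kj are placeholders for network predictions."""
    T = np.zeros((3, 3))
    for r, t in zip(edge_vecs, coeffs):
        e = r / np.linalg.norm(r)   # unit edge direction
        T += t * np.outer(e, e)     # 2-fold tensor product e ⊗ e
    return T

# two neighbors along x and y; the resulting tensor is diagonal here
T = atomic_rank2_tensor(np.array([[2., 0., 0.], [0., 1., 0.]]), [2.0, 3.0])
# T = diag(2, 3, 0)
```

Because each term is an outer product of a vector with itself, the result is symmetric by construction, mirroring the symmetric tensor bases discussed above.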
One can readily show that the tensors predicted by ETGNN strictly satisfy the equivariance condition. The tensor product of two tensors \({u}^{\left({l}_{1}\right)}\) and \({v}^{\left({l}_{2}\right)}\) of orders l1 and l2 yields another equivariant tensor \({\left(u\otimes v\right)}^{\left({l}_{1}\otimes {l}_{2}\right)}\) in a higher-order space. Since each edge vector in real space is rotationally equivariant, tensors obtained by the tensor product of multiple edge vectors are also equivariant. Moreover, the atomic tensor constructed by Eq. (2) satisfies the proper symmetry of the local structure, since all neighboring atoms [\({k}_{i}\in N\left(j\right)\)] of the central atom j are included in the summation. If the cluster centered on atom j is symmetric under a rotation or inversion operation R about atom j (i.e., \(\forall {k}_{i}\in N\left(j\right)\), \(R{\overrightarrow{r}}_{{k}_{i}}={\overrightarrow{r}}_{{k}_{i}^{{\prime} }}\) with \({k}_{i}^{{\prime} }\in N\left(j\right)\)), then the summation in Eq. (2) remains unchanged. A major advantage is that the atomic tensor expansion satisfies the symmetry of the atomic local structure inherently, without learning it through data augmentation or redundant network parameters. We also verified that the tensors predicted by our ETGNN model are indeed compatible with the symmetry of the examined system (see Supplementary Discussion 1 for details). Note that the equivariant tensors constructed by Eqs. (1) and (2) apply to all periodic and nonperiodic systems.
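The equivariance argument can be checked numerically: rotating every edge vector of the local environment while keeping the (invariant) coefficients fixed must co-rotate the predicted tensor, T' = R T Rᵀ. A minimal sketch using the symmetric-part instance of Eq. (2):

```python
import numpy as np

def atomic_tensor(edges, coeffs):
    # symmetric-part instance of Eq. (2): sum_k t_k * (e_k ⊗ e_k)
    T = np.zeros((3, 3))
    for r, t in zip(edges, coeffs):
        e = r / np.linalg.norm(r)
        T += t * np.outer(e, e)
    return T

# rotation by 90 degrees about the z axis
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
edges = np.array([[1., 0., 0.], [1., 1., 0.]])
coeffs = [1.5, -0.7]

T = atomic_tensor(edges, coeffs)
T_rot = atomic_tensor(edges @ R.T, coeffs)  # rotated edges, invariant coefficients
# T_rot equals R @ T @ R.T, i.e., the tensor transforms equivariantly
```

The check works for any rotation R because the coefficients are scalars and the tensor bases are built only from the rotated edge directions.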
Below we describe the details of the expansion of tensors with respect to edge directions, taking the force vector, BEC, DL tensor, and PZ tensor as examples. The force vector and BEC of each atom are directly represented by the corresponding atomic tensor contribution \({T}_{j}^{M}\), while the dipole moment, DL, and PZ tensors, which describe the whole crystal, are represented by the average of the tensor contributions \({T}_{j}^{M}\) of the atoms in the primitive cell.
The force on the jth atom is the sum of the forces from the surrounding neighbor atoms:
$$\vec{F}_{j}=\sum_{k \in N(j)} t_{kj}\,\vec{e}_{kj}. \tag{3}$$
The coefficients \({t}_{kj}\) are the magnitudes of the forces between the atom pairs.
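Eq. (3) is the rank-1 case of the expansion; a minimal NumPy sketch (coefficient values are placeholders for the network output):

```python
import numpy as np

def atomic_force(edge_vecs, t):
    """Eq. (3): F_j = sum_k t_kj * e_kj, with e_kj unit edge vectors;
    t_kj acts as a signed pairwise force magnitude (placeholder values)."""
    e = edge_vecs / np.linalg.norm(edge_vecs, axis=1, keepdims=True)
    return (np.asarray(t)[:, None] * e).sum(axis=0)

F = atomic_force(np.array([[2., 0., 0.], [0., 1., 0.]]), [1.0, 2.0])
# F = [1.0, 2.0, 0.0]
```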
Since the BEC is a generally non-symmetric second-order tensor, the terms in Eq. (2) are explicitly split into a symmetric part \({T}_{j}^{sym}\) and a non-symmetric part \({T}_{j}^{non-sym}\):
$$T_{j}=T_{j}^{sym}+T_{j}^{non\text{-}sym}, \tag{4}$$
$$T_{j}^{sym}=\sum_{k \in N(j)} t_{kj}\,\vec{e}_{kj} \otimes \vec{e}_{kj}, \tag{5}$$
$$T_{j}^{non\text{-}sym}=\sum_{\substack{k,i \in N(j) \\ k \ne i}} t_{kji}\,\vec{e}_{kj} \otimes \vec{e}_{ji}. \tag{6}$$
\({T}_{j}^{sym}\) is expanded with the second-order symmetric tensor bases \({\overrightarrow{e}}_{kj}\otimes {\overrightarrow{e}}_{kj}\) constructed by the unit direction vectors \({\overrightarrow{e}}_{kj}\) between the atom pairs kj. \({T}_{j}^{non-sym}\) is expanded with the asymmetric tensor bases \({\overrightarrow{e}}_{kj}\otimes {\overrightarrow{e}}_{ji}\) constructed by the unit direction vectors \({\overrightarrow{e}}_{kj}\) and \({\overrightarrow{e}}_{ji}\). Since \({T}_{j}^{sym}\) and \({T}_{j}^{non-sym}\) require one and two individual neighbor atoms respectively, the expansion coefficients \({t}_{kj}\) and \({t}_{kji}\) are mapped from the hidden features of the edge messages (\({m}_{kj}\)) and the triplet messages (\({m}_{kji}\)) respectively. The embedding and updating of the triplet messages (\({m}_{kji}\)) will be illustrated in detail in the following network implementation section. It is important to note that the expansion formula of BEC can be applied to all second-order atomic tensors.
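The decomposition in Eqs. (4)–(6) can be sketched as follows. The coefficients `t_pair` and `t_trip` stand in for the values mapped from the edge messages \({m}_{kj}\) and triplet messages \({m}_{kji}\); taking \({\overrightarrow{e}}_{ji}\) as the reversed neighbor edge direction is a simplifying assumption for this sketch.

```python
import numpy as np

def bec_atomic(edges, t_pair, t_trip):
    """Sketch of Eqs. (4)-(6). edges[k] points from neighbor k to atom j;
    e_ji is taken here as -edges[i] (from j toward neighbor i).
    All coefficients are placeholders for network predictions."""
    e = edges / np.linalg.norm(edges, axis=1, keepdims=True)
    n = len(e)
    # Eq. (5): symmetric part from single edges
    T_sym = sum(t_pair[k] * np.outer(e[k], e[k]) for k in range(n))
    # Eq. (6): non-symmetric part from ordered pairs of distinct neighbors
    T_non = sum(t_trip[k][i] * np.outer(e[k], -e[i])
                for k in range(n) for i in range(n) if k != i)
    return T_sym + T_non  # Eq. (4)

edges = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
T = bec_atomic(edges, t_pair=[1.0, 2.0, 3.0], t_trip=np.full((3, 3), 0.5))
```

With orthogonal neighbor directions, the off-diagonal entries come entirely from the triplet term, while the diagonal is set by the pairwise term, which makes the roles of the two bases easy to see.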
The DL tensor is a second-order symmetric tensor of the whole crystal and does not grow with the size of the primitive cell. It can be represented as the average of the atomic contributions of the second-order symmetric tensors \({T}_{j}^{sym}\). The edge-based expansion of the DL tensor can therefore be written as:
$$DL=\frac{1}{N}\sum_{j=1}^{N} \sum_{k \in N(j)} t_{kj}\,\vec{e}_{kj} \otimes \vec{e}_{kj}. \tag{7}$$
The PZ tensor is a third-order symmetric tensor. According to the above discussion, the edge-based expansion formula of the PZ tensor is given as follows:
$$PZ=\frac{1}{N}\sum_{j=1}^{N} \left( \sum_{k \in N(j)} t_{kj}\left(\vec{e}_{kj}\right)^{\otimes 3} + \sum_{\substack{k,i \in N(j) \\ k \ne i}} t_{kji}\,\vec{e}_{kj} \otimes \left(\vec{e}_{ji}\right)^{\otimes 2} \right). \tag{8}$$
The first term is constructed by the 3-fold tensor product of \({\overrightarrow{e}}_{kj}\), and the second term is constructed by the tensor product of \({\overrightarrow{e}}_{kj}\) with the 2-fold tensor product of \({\overrightarrow{e}}_{ji}\). Both terms satisfy the symmetry of \({T}_{abc}={T}_{acb}\).
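The two terms of Eq. (8) and the symmetry \({T}_{abc}={T}_{acb}\) can be verified with a short sketch; coefficients are again placeholders, and the neighbor edge vectors are reused directly for both factors.

```python
import numpy as np

def tensor_power(e, n):
    """n-fold tensor product e ⊗ e ⊗ ... ⊗ e of a unit vector."""
    T = e
    for _ in range(n - 1):
        T = np.tensordot(T, e, axes=0)
    return T

def pz_atomic(edges, t1, t2):
    """Sketch of the per-atom part of Eq. (8):
    3-fold products of e_kj plus e_kj ⊗ (e_ji)^{⊗2}."""
    e = edges / np.linalg.norm(edges, axis=1, keepdims=True)
    n = len(e)
    T = np.zeros((3, 3, 3))
    for k in range(n):
        T += t1[k] * tensor_power(e[k], 3)          # first term
        for i in range(n):
            if i != k:                               # second term
                T += t2[k][i] * np.tensordot(e[k], tensor_power(e[i], 2), axes=0)
    return T

T = pz_atomic(np.array([[1., 1., 0.], [0., 0., 1.]]),
              t1=[0.4, -0.2], t2=[[0., 0.7], [1.1, 0.]])
# T is symmetric under swapping its last two indices, T_abc == T_acb
```

Both basis terms place a 2-fold product of the same vector in the last two slots, so the symmetry in those indices holds term by term.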
Network implementation
In traditional GNNs, the features of each atom are embedded in a high-dimensional space and updated by passing messages between atoms. The contribution of each atom to a scalar property of the crystal is then obtained through an atom-wise regression neural network. According to the tensor prediction framework discussed in the previous section, the expansion coefficients of tensors up to order three are mapped from the features of the edges or the triplets. In this work, we designed the edge-based tensor prediction graph neural network (ETGNN), a modified version of DimeNet++38. Although in principle any common GNN can be adopted to predict tensor properties within our framework, we adopt DimeNet++ here because it is an invariant GNN that has shown high precision in the prediction of scalar properties22,38. ETGNN obtains the tensor expansion coefficients by updating the learned embeddings of the edges and triplets in the crystal graphs.
ETGNN inherits the directional message passing framework of DimeNet++, i.e., the atom features \({h}_{j}\), \({h}_{i}\) and the edge features \({u}_{ij}^{\left(k\right)}\) of atom pair ji are integrated into the directional edge message \({m}_{ji}^{\left(k\right)}\) sent from atom j to atom i, where k indexes the multiplicity of the edges. Furthermore, ETGNN introduces an additional triplet component into the crystal graph, composed of node j and two of its neighbor nodes k and i. The nodes, edges, and angles contained in the triplets are embedded into the triplet messages mkji. Using the equivariant expressions of tensors proposed in the present work, ETGNN can predict both scalars and various tensor properties, such as forces \(\overrightarrow{F}\), BECs, DL, and PZ tensors. ETGNN realizes the following mapping:
$$f_{\theta}:\left\{ m_{kj}^{(k_1)}, \cdots, m_{ji}^{(k_2)}, \cdots, m_{kji}^{(k_1,k_2)}, \cdots \right\} \to R, \vec{F}, BEC, DL, PZ, \cdots \tag{9}$$
where \(\theta\) denotes the learnable parameters of the network.
The architecture of ETGNN is shown in Fig. 2a. We kept the existing architecture of DimeNet++ and introduced the triplet embedding and triplet update blocks. For tensor prediction, an edge-wise layer \({\varphi }_{edge}\) and a triplet-wise layer \({\varphi }_{triplet}\) produce the coefficients in the tensor expansion: \({t}_{kj}={\varphi }_{edge}\left({m}_{kj}\right)\), \({t}_{kji}={\varphi }_{triplet}\left({m}_{kji}\right)\).
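The two regression heads \({\varphi }_{edge}\) and \({\varphi }_{triplet}\) can be as simple as small per-edge and per-triplet MLPs mapping a hidden message vector to one scalar coefficient; the layer sizes and activation below are illustrative assumptions, not the published hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

class ScalarHead:
    """Tiny two-layer MLP mapping a hidden message vector to one scalar
    expansion coefficient, t = phi(m). Sizes and the tanh activation are
    illustrative assumptions."""
    def __init__(self, n_in, n_hidden=64):
        self.W1 = rng.normal(size=(n_hidden, n_in)) / np.sqrt(n_in)
        self.W2 = rng.normal(size=n_hidden) / np.sqrt(n_hidden)

    def __call__(self, m):
        return float(self.W2 @ np.tanh(self.W1 @ m))

phi_edge, phi_triplet = ScalarHead(128), ScalarHead(128)
t_kj = phi_edge(rng.normal(size=128))      # coefficient for an edge term
t_kji = phi_triplet(rng.normal(size=128))  # coefficient for a triplet term
```

Because the heads consume only invariant message features, the scalar coefficients they output are invariant, which is what keeps the full tensor expansion equivariant.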
Triplet embedding block. The triplet embedding block is shown in Fig. 2b. The triplet message is embedded based on the distances \(\left|{\overrightarrow{r}}_{kj}\right|\) and \(\left|{\overrightarrow{r}}_{ji}\right|\) between node j and its neighbor nodes k and i, the angle \({\alpha }_{kji}\) between \({\overrightarrow{r}}_{kj}\) and \({\overrightarrow{r}}_{ji}\), and the atomic numbers \({Z}_{k}\), \({Z}_{j}\) and \({Z}_{i}\). The spherical Bessel function \({\varphi }_{SBF}:\left({R}^{1},{R}^{1}\right)\to {R}^{{N}_{SRBF}\cdot {N}_{SHBF}}\) is used as the two-dimensional joint bases of the distances \(\left|{\overrightarrow{r}}_{kj}\right|\) and the angles \({\alpha }_{kji}\) in the triplets:
$$\phi_{SBF}\left(\left|\vec{r}_{kj}\right|,\alpha_{kji}\right)=\sqrt{\frac{2}{r_{c}^{3}\,j_{l+1}^{2}\left(z_{ln}\right)}}\; j_{l}\!\left(\frac{z_{ln}}{r_{c}}\left|\vec{r}_{kj}\right|\right) Y_{l}^{0}\left(\alpha_{kji}\right), \tag{10}$$
where \({j}_{l}\) is the spherical Bessel function of the first kind of order l, and \({z}_{ln}\) is its nth root, with \(0\le l\le {N}_{SHBF}-1\) and \(1\le n\le {N}_{SRBF}\). The basis values are multiplied by a cosine cutoff function to obtain the two-dimensional joint basis representation of \(\left|{\overrightarrow{r}}_{kj}\right|\) and \({\alpha }_{kji}\): \({a}_{SBF}^{\left(kj,ji\right)}={f}_{c}\left(\left|{\overrightarrow{r}}_{kj}\right|\right){\varphi }_{SBF}\left(\left|{\overrightarrow{r}}_{kj}\right|,{\alpha }_{kji}\right)\). The basis coefficients are then linearly combined through a self-interaction layer \({\phi }_{SBF}:{R}^{{N}_{SRBF}\cdot {N}_{SHBF}}\to {R}^{{n}_{tri}}\), where ntri is the number of hidden features of the triplets (ntri = 128 in this work). From \({a}_{SBF}^{\left(kj,ji\right)}\), \({e}_{RBF}^{\left(ji\right)}\), and the atomic numbers \({Z}_{k}\), \({Z}_{j}\), and \({Z}_{i}\), the embedding of the triplet is constructed as:
$$m_{kji}^{(0)}=\sigma\!\left[ W\left( g\left(Z_i\right) \oplus g\left(Z_j\right) \oplus g\left(Z_k\right) \oplus \phi_{SBF}\left(a_{SBF}^{(kj,ji)}\right) \oplus \phi_{RBF}\left(e_{RBF}^{(ji)}\right) \right)+b \right], \tag{11}$$
where σ is the ELU activation function39, and g is a learnable atomic embedding layer.
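For l = 0 the basis of Eq. (10) reduces to a closed form (\(j_0(x)=\sin x/x\) with roots \(z_{0n}=n\pi\), and \(Y_{0}^{0}\) constant, so the angle dependence drops out), which makes a compact sketch possible; the roots of higher-l spherical Bessel functions are omitted here.

```python
import numpy as np

def sbf_l0(r, n, r_c):
    """Eq. (10) for l = 0: j_0(x) = sin(x)/x, z_0n = n*pi,
    Y_0^0 = 1/sqrt(4*pi). Higher l would need numerical roots of j_l."""
    z = n * np.pi
    j0 = lambda x: np.sinc(x / np.pi)              # np.sinc(x) = sin(pi x)/(pi x)
    j1 = lambda x: np.sin(x) / x**2 - np.cos(x) / x
    norm = np.sqrt(2.0 / (r_c**3 * j1(z)**2))      # prefactor of Eq. (10)
    return norm * j0(z * r / r_c) / np.sqrt(4.0 * np.pi)

# the radial basis is positive inside the cutoff and vanishes exactly at r_c
inside = sbf_l0(2.5, 1, 5.0)
at_cutoff = sbf_l0(5.0, 1, 5.0)
```

In the network this value would additionally be damped by the cosine cutoff \(f_c\) before being linearly combined by the self-interaction layer.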
Triplet update block. The embeddings of edge messages and triplet messages are updated through multiple edge update blocks and triplet update blocks. Since the goal of ETGNN is tensor property prediction, the edge update block adopts the original interaction layer of DimeNet++ to ensure that the modified network retains the accuracy of the original one. The triplet update block uses a directional message passing mechanism similar to that in the edge update block. The update of the triplet features can be expressed as:
$$m_{kji}^{(t+1)}=f_{update}\left( m_{kji}^{(t)}, f_{int}\left( m_{kj}^{(t)}, m_{ji}^{(t)}, e_{RBF}^{(kj)}, e_{RBF}^{(ji)}, a_{SBF}^{(kj,ji)} \right) \right), \tag{12}$$
where \({f}_{update}\) and \({f}_{int}\) are the update and interaction functions, respectively, implemented by neural networks. The hidden features of the triplets \({m}_{kji}^{\left(t\right)}\) are updated using the directional edge messages \({m}_{kj}^{\left(t\right)}\) and \({m}_{ji}^{\left(t\right)}\) at the same layer. The angle \({\alpha }_{kji}\) between the edge vectors \({\overrightarrow{r}}_{kj}\) and \({\overrightarrow{r}}_{ji}\) is also explicitly embedded through the joint basis \({a}_{SBF}^{\left(kj,ji\right)}\) when updating \({m}_{kji}^{\left(t\right)}\).
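One way to read Eq. (12) is as a residual update driven by an interaction of the two edge messages with the projected joint basis; the elementwise-product interaction and residual form below are illustrative assumptions, not the exact published layer.

```python
import numpy as np

rng = np.random.default_rng(1)

def triplet_update(m_kji, m_kj, m_ji, a_sbf, W):
    """Sketch of Eq. (12): f_int mixes the two edge messages with the
    (already projected) joint basis a_SBF; f_update adds the result
    residually. Elementwise product and residual form are assumptions."""
    interaction = np.tanh(W @ (m_kj * m_ji * a_sbf))  # f_int
    return m_kji + interaction                        # f_update

h = 128
W = rng.normal(size=(h, h)) / np.sqrt(h)
m_kji = rng.normal(size=h)
m_new = triplet_update(m_kji, rng.normal(size=h), rng.normal(size=h),
                       rng.normal(size=h), W)
```

Since all inputs (distances, angles, species embeddings) are invariants, the updated triplet message stays invariant, and only the tensor bases of Eq. (2) carry the directional information.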
Tests and applications
We trained ETGNN with the forces from the molecular dynamics (MD) trajectories of the extended systems provided in ref. 40. The extended systems consist of supercells of inorganic and organic crystals and also contain clusters, high entropy alloys (HEA), and two-dimensional materials. Pyridine and bulk TiO2 contain two and three phases, respectively, which tests the prediction accuracy of the model across different phases. The high entropy alloy CoCrFeMnNi contains five metal elements randomly arranged in the crystal, posing a great challenge to the model. DeepPot-SE40 and DeePMD41 are two representative neural network potential energy surface (PES) models, which calculate the forces using the gradients of the potential energy: \({\overrightarrow{F}}_{i}=-{\nabla }_{i}E\). For each sub-system, 90% of randomly selected structures were used for training and the remaining 10% for testing40. As can be seen from Table 1, the ETGNN model achieved lower RMSEs than DeepPot-SE and DeePMD on nearly all the sub-systems.
Table 1
Comparison of the root mean square errors (RMSEs) of the predicted forces for the extended systems by ETGNN, DeepPot-SE, and DeePMD. The lowest RMSE among the considered models for each sub-system is displayed in bold. The unit is meV/Å.

| System | Sub-system | DeepPot-SE | DeePMD | ETGNN |
|---|---|---|---|---|
| Bulk Cu | FCC solid | 90 | 90 | **89** |
| Bulk Ge | Diamond solid | 38 | 35 | **7** |
| Bulk Si | Diamond solid | 36 | 31 | **5** |
| Bulk Al2O3 | Trigonal solid | 49 | 55 | **36** |
| Bulk C5H5N | Pyridine-I | 25 | 25 | **14** |
| | Pyridine-II | 39 | 39 | **19** |
| Bulk TiO2 | Rutile | 137 | 163 | **61** |
| | Anatase | 181 | 216 | **82** |
| | Brookite | 94 | 109 | **44** |
| MoS2 + Pt | MoS2 slab | **23** | 34 | 27 |
| | bulk Pt | 84 | 226 | **27** |
| | Pt surface | 105 | 187 | **37** |
| | Pt cluster | 201 | 255 | **68** |
| | Pt on MoS2 | 94 | 127 | **33** |
| CoCrFeMnNi HEA | rand. occ. I | 394 | 481 | **373** |
| | rand. occ. II | 410 | 576 | **385** |
Similar to the atomic force, some other tensors can be obtained as the first or second partial derivative of the energy with respect to an external field42. Here we compare this gradient-based tensor prediction method with our ETGNN models. We successfully trained ETGNN models on datasets of the first-order dipole moment µ, the second-order polarizability tensor α, and the atomic second-order chemical shift tensors σ of two organic molecules, ethanol and allyl p-tolyl ether. The expansion formulas of the dielectric tensor and the Born effective charge tensor were used to decompose the polarizability and the atomic chemical shift tensors, respectively. We compared the results of ETGNN with FieldSchNet42, which computes these response properties from derivatives of the total energy. Table 2 shows the mean absolute errors (MAEs) of ETGNN and FieldSchNet trained on ethanol in vacuum and allyl p-tolyl ether in vacuum. As can be seen from Table 2, the prediction errors of ETGNN are much smaller than those of FieldSchNet for almost all the listed tensor properties.
Table 2
Comparison of the mean absolute errors (MAEs) of ETGNN and FieldSchNet trained on ethanol in vacuum and allyl p-tolyl ether in vacuum. The lowest MAE between the two models for each property is displayed in bold.

| Property | Unit | allyl p-tolyl ether (vacuum): FieldSchNet | allyl p-tolyl ether (vacuum): ETGNN | ethanol (vacuum): FieldSchNet | ethanol (vacuum): ETGNN |
|---|---|---|---|---|---|
| µ | D | 0.003 | **0.00024** | 0.004 | **0.0018** |
| α | Bohr3 | 0.039 | **0.0202** | **0.008** | 0.025 |
| σall | ppm | 0.273 | **0.0293** | 0.169 | **0.00735** |
| σC | ppm | 0.301 | **0.0417** | 0.194 | **0.00721** |
| σH | ppm | 0.045 | **0.0202** | 0.123 | **0.00385** |
| σO | ppm | 2.732 | **0.0147** | 0.401 | **0.00286** |
Based on the tensor expansion formulas, we tested the prediction accuracy of ETGNN for the BECs, the DL tensors \({\epsilon }_{\infty }\) (electronic contribution), and the PZ tensors of crystals. The training datasets are from density functional perturbation theory (DFPT) calculations on 3190 Al2O3, 3992 SiO2, 10000 HfO2, 3211 AlN, 3000 CdS, and 3000 ZnO randomly perturbed structures. HfO2 is a potential complementary metal-oxide-semiconductor (CMOS)-compatible ferroelectric memory material and can form many crystal structures43–46. The perturbed structures of HfO2 consist of 5000 structures with 6-coordinated Hf ions and 5000 structures with 8-coordinated Hf ions. Each model was trained with 60% of the data, validated with 20%, and tested with the remaining 20%. Comparisons of the DFT-calculated and ETGNN-predicted tensors on the test datasets of the perturbed structures are shown in Fig. 3. As shown in Fig. 3(a−c), each component of the BECs predicted by ETGNN is very close to the corresponding DFT value. The mean absolute errors (MAEs) of the components of the predicted BECs on the Al2O3, SiO2, and HfO2 test sets are only \(1.18\times {10}^{-2}\) e, \(4.28\times {10}^{-3}\) e, and \(1.94\times {10}^{-3}\) e, respectively. The MAEs of the components of the predicted \({\epsilon }_{\infty }\) on the Al2O3, SiO2, and HfO2 test sets are only \(5.54\times {10}^{-3}\), \(1.98\times {10}^{-3}\), and \(2.86\times {10}^{-3}\), respectively. For the third-order PZ tensor, ETGNN still maintains high prediction accuracy: the MAEs of the components of the predicted PZ tensors on the AlN, CdS, and ZnO test sets are only \(3.29\times {10}^{-4}\) C/m2, \(4.69\times {10}^{-4}\) C/m2, and \(5.59\times {10}^{-4}\) C/m2, respectively.
The tensor prediction test on these perturbed structures indicates that invariant GNNs can achieve high prediction accuracy in principle by using the edge-based tensor expressions proposed in this work.
ETGNN has also been tested on the BECs and DL tensors \({\epsilon }_{\infty }\) of about 5000 non-metallic materials in the JARVIS-DFT database47. The comparison of the predicted and target values on the JARVIS-DFT dataset can be seen in Fig. S2 (Supplementary). This dataset contains more than 80 elements of the periodic table. Structures containing some rare earth elements in this dataset may carry large calculation errors, because pseudopotentials often struggle to accurately describe the complex electronic structures of rare earth elements; this affects the prediction performance of ML methods. For the JARVIS-DFT dataset, 70%, 20%, and 10% of the calculated values were used as the training, validation, and test sets, respectively. The MAE of ETGNN on the BEC test set still reached a low value of 0.045 e. This high prediction accuracy means that ETGNN can serve as a universal model for calculating BECs at a much faster speed than DFT.
For comparison with traditional ML models, an ETGNN model was trained on the maximum BECs in the JARVIS-DFT dataset. For the maximum component of the BECs, the MAE of the ETGNN model on the test set is 0.12 e, only one fifth of that of the classical force-field inspired descriptors (CFID) model used in ref. 47. It has been reported that large deviations between experimental and calculated dielectric constants occur for approximately 20% of compounds48; predicting the dielectric tensors of various crystals was therefore thought to be a considerable challenge. The ETGNN regression model for the dielectric tensors was nevertheless successfully trained and achieved an MAE of 0.21 on the test dataset.
ETGNN automatically develops physical insights into the Born effective charges of materials while training on the JARVIS-DFT dataset. It can be observed from Fig. 4 that the embedded features from the ETGNN model trained on the mean diagonal value of the BECs show a distinct separation between atoms with different mean diagonal values. The mean diagonal values of the data points in the principal component analysis (PCA) projection plot gradually increase from the upper left to the upper right. The mean diagonal values of the BECs of metal elements in oxides are positive, while those of some group VIII metal elements such as Ru, Fe, and Co in multi-element alloys reach negative minima. The mean diagonal values of group VA, VIA, and VIIA non-metallic elements increase sequentially. The BECs of atoms in some single-element crystals, such as Si and He, are almost zero, lying on the borderline between positive and negative BECs in the PCA projection plot. The mean diagonal values of hydrogen, alkali, group IIA, and group IIIA metal elements are positive and increase sequentially. Some transition metal cations have larger mean diagonal values of the BECs than other elements and are located closer to the upper right corner. The representation learned by ETGNN can thus be used to characterize and understand the large chemical space of the BECs of materials.
In conclusion, we generalized the atom-centric framework of GNNs to tensors and proposed a general tensor prediction framework that decomposes tensorial properties into local spatial components projected onto the edge directions of multiple-sized clusters. Based on this idea, we gave the explicit formula for the body-ordered expansions of general tensors. These expansions are rotationally equivariant, while the coefficients in the expansions are invariant and can be predicted by various traditional invariant GNNs. We designed ETGNN based on an invariant GNN to verify our tensor prediction framework. ETGNN outperforms state-of-the-art models in precision on a variety of tensor prediction tasks. For force prediction on the extended systems, ETGNN outperforms DeePMD and DeepPot-SE on nearly all of the test systems, showing that it can provide precise potentials for various molecules and crystals. For the response properties, including the first-order dipole moment µ, the second-order polarizability tensor α, and the atomic second-order chemical shift tensors σ, ETGNN achieved much higher accuracy than FieldSchNet, demonstrating a strong fitting ability for the response properties under various external fields. ETGNN was also used to predict the BECs, DL, and PZ tensors of randomly perturbed structures and achieved very high accuracy; the evaluation on these perturbed structures indicates that the tensor expansions proposed in this work are valid and general. Besides, we successfully trained ETGNN regression models for predicting the BECs and DL tensors of about 5000 non-metallic materials containing more than 80 elements, whereas previous work could not handle the full tensors and only trained regression models for the maximum component of these tensors47.
Taking into account the errors of the tensors calculated by DFPT, the mean absolute error of the predicted tensors in this dataset is relatively low. Our study suggests that ETGNN is a general, efficient and accurate framework for GNNs to predict the tensorial properties.