Fuzzy analytic hierarchy process with ordered pair of normalized real numbers

The Analytic Hierarchy Process (AHP) is a widely used multi-criteria decision-making method, and most AHP variants rely on the judgments of experts to derive priority scales. However, experts' judgments may be subjective. Using machine learning algorithms for decision-making can be more objective, but machine learning algorithms depend strongly on the collected data and lack flexibility. This paper combines experts' judgments with algorithmic judgments to mitigate the bias of experts' judgments while keeping decision-making flexible. The authors introduce the ordered pair of normalized real numbers (OPNs) into the AHP method for the first time and propose the fuzzy analytic hierarchy process with OPNs (OFAHP). The OFAHP uses OPNs to combine experts' judgments with those of machine learning algorithms and then makes decisions by computing with OPNs. Experiments on real data sets show that the proposed method produces reasonable decision results. Moreover, when the experts' judgments are wrong or invalid, the judgments given by the machine learning algorithm can correct them so that a reasonable decision-making result is still obtained.


Introduction
The Analytic Hierarchy Process (AHP) was first proposed by T. L. Saaty as an effective method for solving complex multi-criteria decision-making problems, and it helps decision-makers reach better decisions. The main advantage of the AHP is that it combines quantitative and qualitative analysis, which mitigates the judgment error caused by the excessive influence of qualitative factors in traditional decision-making methods. Many researchers have since studied the AHP and applied it in many fields, making it one of the most widely used multi-criteria decision-making tools. Many outstanding works have been published based on the AHP, including applications in planning, selecting the best alternative, resource allocation, logistics location, conflict resolution, optimization, and risk assessment (Vaidya and Kumar 2006; Ho and Ma 2018).
In 1965, the Fuzzy Set Theory was first proposed by Zadeh (Kahraman et al. 2016). It describes the uncertainty of things through membership degrees. The main idea of a fuzzy set is to allow any value in the closed interval [0, 1] as the membership degree of an element, instead of restricting membership to 0 or 1 as in a traditional set. Fuzzy sets expand the application scope of mathematics from the deterministic field to the fuzzy field and from precise phenomena to fuzzy phenomena. At present, the Fuzzy Set Theory has been widely used in different domains, such as fuzzy clustering (Ruspini et al. 2019), fuzzy programming, fuzzy control, and fuzzy prediction. In 1970, Bellman and Zadeh introduced the Fuzzy Set Theory into multi-criteria decision-making for the first time (Bellman and Zadeh 1970). They proposed the concept and model of fuzzy decision analysis to address uncertainty in decision-making.
Because most of the decision-making problems in real life are uncertain, fuzzy decision-making is more in line with the actual situation. At present, fuzzy decision-making has become one of the most important applications of the Fuzzy Set Theory (Blanco-Mesa et al. 2017).
Since its establishment, the Fuzzy Set Theory has been continuously developed, and many new fuzzy sets have been proposed. For example, Zadeh introduced type-2 fuzzy sets in 1975, Smarandache introduced neutrosophic sets in 1999, Torra and Narukawa introduced hesitant fuzzy sets in 2010, and Gündogdu introduced spherical fuzzy sets in 2019. Among all fuzzy sets, the intuitionistic fuzzy set, proposed by Atanassov (1989) as an extension of the traditional fuzzy set, is one of the most widely used. Each fuzzy number in an intuitionistic fuzzy set carries three pieces of information: the membership degree, the non-membership degree, and the hesitation degree. Because the hesitation degree reflects a neutral attitude towards things, it is more in line with the nature of uncertainty in the real world. Researchers have carried out many studies on intuitionistic fuzzy sets, including applying them to multi-attribute decision-making. For example, Li (2005) studied multi-attribute decision-making based on intuitionistic fuzzy sets and proposed a corresponding decision-making method.
In the AHP, when comparing the importance of criteria in pairs, we need to choose a number from 1 to 9 as the score according to the given scale. However, decision-makers are limited by their knowledge level and other factors, so it may be difficult or impossible to give a precise value. Therefore, researchers combined the Fuzzy Set Theory with the AHP to produce the Fuzzy Analytic Hierarchy Process (FAHP). In the FAHP, each pairwise comparison between criteria is represented by a fuzzy number, which is closer to the actual situation in real life. To adapt to more complex situations, Xu and Liao (2014) proposed the Intuitionistic Fuzzy Analytic Hierarchy Process (IFAHP) based on the Intuitionistic Fuzzy Set Theory, extending the AHP and the FAHP; in the IFAHP, all preferences are represented by intuitionistic fuzzy numbers. Like the AHP, the IFAHP has been widely used because it can handle more complex problems, especially when the decision-maker is uncertain. Although the IFAHP may have advantages in dealing with complex problems, all of the AHP, FAHP, and IFAHP decision-making methods depend on the judgments given by experts. However, experts' judgments are often subjective and are affected by the experts' own cognitive level and emotional tendencies. Different experts often give different judgments, so the decision-making may also be biased.
Nowadays, many decision problems can be treated as classification problems in machine learning, so we can also use machine learning algorithms to make decisions, for example decision trees, support vector machines, and logistic regression. Decision-making methods based on machine learning algorithms have been applied in many fields; for example, Jiang et al. (2021) applied a machine learning algorithm to support decision-making in emergency department triage for patients. The decisions made by machine learning algorithms are objective, and in some cases their efficiency and accuracy surpass those of human beings. However, most current machine learning decision algorithms depend strongly on the collected data. In some cases, collecting enough data and ensuring its quality is challenging, which affects the accuracy of machine-learning-based decision-making. At the same time, it is difficult for machine learning algorithms to introduce background knowledge of related fields explicitly, as the AHP does, and the interpretability of many machine learning algorithms still needs improvement (de Laat 2018).
To sum up, this paper attempts to combine the subjective judgments given by experts with the objective judgments given by a machine learning algorithm within the traditional FAHP method in order to make better decisions. Therefore, this paper proposes the OFAHP, a new method that combines experts' judgments with those of machine learning algorithms through OPNs and then computes and makes decisions with OPNs. Finally, the authors verify on a real data set that the OFAHP is feasible and that the judgments given by the machine learning algorithm can still help make reasonable decisions when the experts' judgments are invalid. The innovations of this paper are as follows: (1) The authors use OPNs, a new mathematical theory, to combine the subjective judgments of experts with the objective judgments given by machine learning algorithms, and then compute with the OPNs. This avoids both the subjectivity of the traditional analytic hierarchy process and the limitation of machine learning algorithms, whose decisions are greatly affected by the collected data.
(2) The authors verify that, in the method proposed in this paper, when the experts' judgments are wrong or invalid, the judgments given by the machine learning algorithm can correct the experts' judgments within a certain range, so that a reasonable decision can still be obtained.

Related work
In this section, we briefly review four fuzzy AHP methods and three decision tree algorithms from machine learning that are most relevant to the work of this paper.

Fuzzy analytic hierarchy process
Considerable research effort has been devoted to the AHP. The fuzzy analytic hierarchy process was first proposed by Buckley in 1985 and is based on the Fuzzy Set Theory proposed by Zadeh (Buckley 1985). Subsequently, with the continuous development of the Fuzzy Set Theory, many FAHP methods have been proposed, such as triangular FAHP, trapezoidal FAHP, intuitionistic FAHP, Pythagorean FAHP, and neutrosophic FAHP (Yucesan and Gul 2021). Here, we briefly review four widely used FAHP methods proposed in recent years.

Intuitionistic fuzzy analytic hierarchy process
In a complex social environment, people are often hesitant when making judgments about events, and traditional fuzzy sets cannot reflect this hesitation. Atanassov therefore proposed the Intuitionistic Fuzzy Set (IFS) as an extension of the fuzzy set proposed by Zadeh (Wu and Xu 2015). Xu and Liao proposed the intuitionistic fuzzy analytic hierarchy process (IFAHP) in 2014. When using the IFAHP, we need to consider the hierarchy of the problem and identify the objective, criteria, sub-criteria, and alternatives. We then compare the criteria and sub-criteria in pairs, and compare the alternatives under each criterion or sub-criterion, scoring according to the given scale to construct intuitionistic preference relations. After all preference relations have been constructed, we need to verify their consistency to avoid self-contradictory preference relations. If the consistency check fails, the decision-maker must reconstruct the intuitionistic preference relations. If it passes, we calculate the priority vector of each intuitionistic preference relation and fuse all the weights. Finally, we rank the overall weights and make a decision. In the IFAHP, the scale of relative importance degrees is expressed with IFNs, and the operational rules of IFNs are adopted (Xu and Liao 2014).
In all AHP variants, it is very important to check the consistency of each preference relation. If the check fails, the inconsistent preference relation is usually returned to the decision-makers, who re-evaluate it until the preference relation is accepted. However, asking experts to re-evaluate is troublesome and sometimes infeasible. In the IFAHP, Xu and Liao proposed a method to check preference relations, and they also developed an algorithm that can automatically repair inconsistent intuitionistic fuzzy preference relations instead of having decision-makers repair them manually.

Hesitant fuzzy analytic hierarchy process
In many decision-making problems, decision-makers face difficult choices: there may be several similar alternatives, and it is not easy to judge between them. To address this, Torra and Narukawa proposed hesitant fuzzy sets in 2010 (Xia and Xu 2017); a hesitant fuzzy set allows the membership of an element to take several possible values.
In 2015, Oztaysi et al. (2015) proposed the hesitant fuzzy analytic hierarchy process and used it to solve a multi-criteria supplier selection problem.

Pythagorean fuzzy analytic hierarchy process
Although both a membership degree and a non-membership degree are defined in the IFS, the IFS cannot describe the case in which the sum of membership and non-membership is greater than 1, so Yager proposed the Pythagorean fuzzy set (PFS) theory in 2013 (Peng and Selvachandran 2019). In the PFS, the membership grade μ_α and non-membership grade ν_α satisfy μ_α^2 + ν_α^2 ≤ 1. The PFS therefore has fewer restrictions than the IFS and can adapt to more situations (Peng and Selvachandran 2019).
Based on the PFS theory, Mohd and Abdullah (2017) proposed the Pythagorean fuzzy analytic hierarchy process for multi-criteria decision-making. In that study, the authors propose a method for determining the weights of the criteria in a decision-making problem under the PFS.

Spherical fuzzy analytic hierarchy process
In 2019, Gundogdu and Kahraman (2019) introduced the spherical fuzzy set (SFS), one of the latest fuzzy set theories, which extends other fuzzy sets by defining a membership function on a spherical surface (Kutlu Gundogdu and Kahraman 2020).
The spherical fuzzy analytic hierarchy process is one of the latest fuzzy AHP methods: Kutlu Gundogdu and Kahraman (2020) proposed it in 2020 and used it to site a wind power farm, which verified the effectiveness of the method.

Decision tree algorithm
In machine learning, many algorithms can be used for decision-making. However, most machine learning algorithms have poor interpretability because of their complex calculation process and cannot expose their internal decision process. The decision tree algorithm splits the complex decision-making process into a series of simple choices by generating a tree-like model, which intuitively explains the entire decision-making process. The authors therefore chose the decision tree algorithm as the calculation method for the objective judgments in this paper. This section briefly introduces three widely used decision tree algorithms.

ID3 algorithm
The ID3 (Iterative Dichotomiser 3) algorithm, the most classical decision tree algorithm, is based on information gain (Yasami and Mozaffari 2010), and many decision tree algorithms are improvements of it. The core of the ID3 algorithm is to select attributes at every node of the decision tree using information gain as the attribute selection criterion, so that each non-leaf node test yields the largest amount of category information about the tested example.

C4.5 algorithm
The C4.5 algorithm was proposed by J. R. Quinlan as an improvement of the ID3 algorithm (Damanik et al. 2019). It uses the information gain ratio as the criterion for selecting split attributes, and it prunes while building the tree model, reducing the model's size. Moreover, the C4.5 algorithm can handle continuous attributes and data sets with missing values.

CART algorithm
The CART (classification and regression trees) algorithm is a decision tree algorithm used for both classification and regression. It selects the attribute with the minimum Gini index as the splitting attribute and, based on the splitting attribute of each node, uses binary recursive partitioning to divide every internal node into two child nodes, recursively forming a binary tree (Rutkowski et al. 2014).
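As a concrete illustration of the kind of decision tree learner discussed above, the short sketch below trains a CART-style tree (binary splits, Gini impurity) on the iris data with scikit-learn. The paper does not prescribe a particular implementation, so the library choice here is only illustrative.

```python
# Minimal CART-style example with scikit-learn: binary splits, Gini impurity
# (the default criterion of DecisionTreeClassifier).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

cart = DecisionTreeClassifier(criterion="gini", random_state=0)
cart.fit(X_train, y_train)

print("test accuracy:", cart.score(X_test, y_test))
print("feature importances:", cart.feature_importances_)
```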
By studying the existing fuzzy analytic hierarchy processes and decision tree algorithms, the authors find that these methods have the following problems. 1. The current fuzzy AHP methods depend on experts' judgments. However, experts' judgments are subjective, and different experts often give different judgments, which affects whether the decisions made by fuzzy AHP are reasonable. 2. Decision tree algorithms make objective decisions, but they rely on data: high decision accuracy requires a large amount of high-quality data, which is often hard to obtain in practice.
In order to solve these two problems, this paper proposes a new method, which combines the experts' judgments with the decision tree algorithm's judgments to form OPNs and then computes with the OPNs to obtain the final decision. This method makes the expert's and the algorithm's judgments complementary. It avoids both the subjectivity of experts and the insufficient accuracy of machine learning algorithms when data are insufficient or of low quality. Moreover, when the experts' judgments are invalid or wrong, the judgments given by the algorithm can correct them within a certain range, so that the decision-making result is still reasonable.

Preliminaries
The ordered pair of normalized real numbers (OPNs) is a new mathematical concept introduced by Zhou (2020), as a generalization of the Intuitionistic fuzzy numbers (IFNs).
In this section, some basic definitions of OPNs used in this paper are introduced.
In earlier fuzzy sets, if μ_α is larger, then v_α should be smaller, and there are additional requirements on the sum of μ_α and v_α, the sum of their squares, and so on. OPNs are different: they have fewer restrictions, so μ_α can be large and v_α can also be large, as long as 0 < μ_α, v_α < 1. This is what makes it possible to combine experts' judgments with the machine learning algorithms' judgments in an OPN.
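To make this concrete, the following minimal Python sketch models an OPN as a plain pair whose two components are only required to lie in (0, 1) individually; the type name and the validation code are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OPN:
    """Ordered pair of normalized real numbers: both components lie in (0, 1).

    Unlike intuitionistic or Pythagorean fuzzy numbers, no joint constraint
    (such as mu + v <= 1) is imposed, so mu can carry an expert judgment and
    v an algorithmic judgment independently of each other.
    """
    mu: float  # subjective judgment, e.g. an expert's 0.1-0.9 scale score
    v: float   # objective judgment, e.g. a value computed from past data

    def __post_init__(self):
        if not (0 < self.mu < 1 and 0 < self.v < 1):
            raise ValueError("both components of an OPN must lie in (0, 1)")

# Example: the expert says 0.7 and the algorithm-derived value is 0.82; both are
# large, a combination an intuitionistic fuzzy number (mu + v <= 1) cannot hold.
pair = OPN(mu=0.7, v=0.82)
```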
Definition 4 For c ∈ R and x, y ∈ (0, 1), the basic ψ-operations of ψ-scalar-multiplication, ψ-addition, and ψ-multiplication are defined as in Zhou (2020), who also provides a simple but practical choice of the function ψ, given as (4). Definition 5 Let α and β be OPNs, and let c be a real number. The scalar multiplication, addition, and multiplication of OPNs are then defined as follows:

The FAHP with OPNs
In this part, the fuzzy analytic hierarchy process with ordered pairs of normalized real numbers (OFAHP) is introduced. The flowchart of the specific steps in OFAHP is shown in Fig. 1.
Step 1 Consider the hierarchy of the problem, and identify the objective, criteria, sub-criteria, and alternatives (Xu and Liao 2014). Then, go to Step 2.
Step 2 Determine the preference relations by pairwise comparison between the criteria and, at the same time, between the alternatives under each criterion. The expert first determines the preference relations according to the specified scale and gives the corresponding preference relation matrices. In addition, the corresponding preference relation matrices are generated using the algorithm provided in this paper. Then, go to Step 3.
Step 3 Combine the preference relation matrix given by the expert with the preference relation matrix calculated by the machine learning algorithm into a new preference relation matrix using OPNs. In the new matrix, each value is an OPN whose first component is given by the expert and whose second component is calculated by the machine learning algorithm. Then, go to Step 4. The goal of Step 3 is to fuse the expert's preference relation and the algorithm's preference relation into a single preference relation via OPNs.
Step 4 Check the consistency of all preference relations. If the consistency check passes, go to Step 6; otherwise, go to Step 5. The purpose of Step 4 is to find out whether there are errors or contradictions in the preference relations.
Step 5 Repair the inconsistent fuzzy preference relations and then go to Step 4. In most AHP variants, inconsistent preference relations are repaired by the decision-makers, which is inconvenient. Because an OPN is a generalization of IFNs, the automatic repair algorithm proposed by Xu and Liao for the IFAHP can also be used; this paper adopts that algorithm to repair inconsistent fuzzy preference relations.
Step 6 Calculate the priority vector of each preference relation, then go to Step 7. Step 7 Fuse all the weights by the operational rules of OPNs, and then choose the best alternative according to the overall weights; the purpose of this step is to arrive at the final decision. Step 8 End. The detailed steps of OFAHP are described below.

Decomposition of complex multi-criteria decision problems
Like the AHP, IFAHP, and FAHP, we must decompose complex multi-criteria decision problems. In order to apply the OFAHP, it is necessary to organize the overall problem into different levels according to the attributes under consideration.
In this paper, we suppose that A = {A_1, A_2, ..., A_n} is the finite set of n alternatives and C = {C_1, C_2, ..., C_m} is the set of criteria with which the elements of A are compared in the hierarchical structure (Xu and Liao 2014).

Comparative judgments with ordered pair of normalized real numbers
After decomposing the complex multi-criteria decision problem into different levels, we can compare the relative importance of the elements in one level with respect to the elements in the previous level to establish fuzzy preference relations (Xu and Liao 2014). This section mainly introduces how to compare the importance of criteria through machine learning algorithms and how to obtain the preference relations between the alternatives and a criterion. In the OFAHP, each pairwise comparison result is represented by an OPN: all pairwise comparison values are OPNs stored in a fuzzy preference relation, so every element of a fuzzy preference relation is an OPN. In the method proposed in this paper, the value of μ_ik is determined by expert scores, and the value of v_ik is determined by machine learning algorithms according to past data. Both μ_ik and v_ik take values in the interval (0, 1).
For μ_ik, the value is determined by the decision-makers according to the 0.1-0.9 scale proposed by Xu and Liao (2014), which is shown in Table 1. In the fuzzy preference relation, μ_ik indicates how much more important A_i is than A_k. For example, μ_ik = 0.5 means that A_i and A_k are equally important; μ_ik > 0.5 indicates that A_i is more important than A_k, and μ_ik < 0.5 indicates that A_k is more important than A_i.

Compare the importance of criteria through the ID3 algorithm
For v_ik, when previous data exist, the authors use machine learning algorithms to determine the value objectively. Let C = {C_1, C_2, ..., C_m} be the set of criteria with which the elements of A are compared in the hierarchical structure; the authors provide an algorithm to compare the importance of any two criteria.
To compare the importance of criteria, the authors adopt the ID3 (Iterative Dichotomiser 3) algorithm (Shao et al. 2001). The ID3 algorithm is one of the widely used machine learning algorithms; it builds a decision tree from discrete data. For each node of the decision tree, the algorithm selects the attribute with the highest information gain as the splitting attribute. After constructing the decision tree, we can calculate the importance value of each attribute from the information gain of each node and the number of samples; this importance value is objective (Yasami and Mozaffari 2010).
Let the sample set be S, with size |S|, let B be one of the condition attributes, and let D be the decision attribute with k values, so that S is divided into categories C_1, ..., C_k. The entropy is given by (10) (Guan and Zeng 2011):

Entropy(S) = − Σ_{i=1}^{k} p_i log2 p_i,

where p_i is the probability that an element of S belongs to category C_i, and p_i = |C_i|/|S|. Suppose that the condition attribute B, which has j different values, is used to divide S; then S is split into j subsets S_1, S_2, ..., S_j, and the information gain is given by (11):

Gain(S, B) = Entropy(S) − Σ_{l=1}^{j} (|S_l|/|S|) Entropy(S_l).
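A small sketch of the two quantities just defined, with entropy computed from the class frequencies and information gain from the partition induced by an attribute; the function names are ours, but the calculations follow (10) and (11) as standard entropy and information gain.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy(S) = -sum_i p_i * log2(p_i), with p_i = |C_i| / |S|  (Eq. 10)."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(values, labels):
    """Gain(S, B) = Entropy(S) - sum_j (|S_j| / |S|) * Entropy(S_j)  (Eq. 11),
    where the subsets S_j are induced by the distinct values of attribute B."""
    total = len(labels)
    gain = entropy(labels)
    for value in set(values):
        subset = [lab for val, lab in zip(values, labels) if val == value]
        gain -= (len(subset) / total) * entropy(subset)
    return gain

# Toy example: a two-valued condition attribute and a binary decision attribute.
print(information_gain(["a", "a", "b", "b"], ["yes", "no", "yes", "yes"]))
```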
After constructing the decision tree, we calculate the importance of each attribute. For each node that splits on the attribute, we multiply the node's entropy by its number of samples, then subtract, for each of its child nodes, the child's entropy multiplied by the child's number of samples. Finally, we add up the results over all such nodes. After calculating the importance of every attribute, we normalize all the results; the greater the importance value, the more important the attribute.
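The per-attribute importance described above can be sketched as follows. The node representation (a dictionary with the splitting attribute, sample count, entropy, and children) is hypothetical, and sum-to-one normalization is used only as a stand-in for the Definition 6 normalization, whose exact form is not reproduced in the text.

```python
def node_contribution(node):
    """Weighted entropy decrease of one split node, following the paper's rule:
    n_samples * entropy(node) minus the sum over its children of
    n_samples(child) * entropy(child)."""
    value = node["n_samples"] * node["entropy"]
    for child in node["children"]:
        value -= child["n_samples"] * child["entropy"]
    return value

def attribute_importances(split_nodes, attributes):
    """Sum the node contributions per splitting attribute, then normalize so the
    importances sum to one (a stand-in for Definition 6). `split_nodes` is a
    list of dicts with keys 'attribute', 'n_samples', 'entropy', 'children'."""
    raw = {a: 0.0 for a in attributes}
    for node in split_nodes:
        raw[node["attribute"]] += node_contribution(node)
    total = sum(raw.values()) or 1.0
    return {a: score / total for a, score in raw.items()}

# Hypothetical two-node tree: the root splits on X[3], one child splits on X[2].
nodes = [
    {"attribute": "X[3]", "n_samples": 147, "entropy": 1.58,
     "children": [{"n_samples": 49, "entropy": 0.0}, {"n_samples": 98, "entropy": 1.0}]},
    {"attribute": "X[2]", "n_samples": 98, "entropy": 1.0,
     "children": [{"n_samples": 50, "entropy": 0.15}, {"n_samples": 48, "entropy": 0.2}]},
]
print(attribute_importances(nodes, ["X[0]", "X[1]", "X[2]", "X[3]"]))
```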

Example 4
The authors use the iris data set (Dua and Graff 2017) and the ID3 algorithm to construct a decision tree. Each instance in the iris data set has four attributes, denoted X[0] to X[3]. In the decision tree there is one node whose split is based on attribute X[2], so the importance of X[2] equals the entropy of that node multiplied by its number of samples, minus, for each of its two child nodes, the child's entropy multiplied by the child's number of samples.
Then we calculate the importance value of attribute X[3]. In the decision tree there are two nodes whose split is based on X[3], so the contributions of both nodes are summed. We then compare the importance of the attributes in pairs and generate a matrix: each comparison result is the difference of the importances of the two attributes, and finally the method in Definition 6 is used for normalization to obtain the final matrix. Note that with expert evaluation, the larger the value of μ_ik, the higher the degree of preference, whereas for v_ik it is the opposite: the smaller the value of v_ik, the higher the degree of preference.
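Following Example 4, a possible sketch of the pairwise criteria comparison is given below: entries start as differences of importance scores and are then rescaled into (0, 1). The min-max rescaling is an assumption standing in for Definition 6; because the difference matrix is antisymmetric, it keeps the diagonal at 0.5 whenever the importances are not all identical.

```python
import numpy as np

def criteria_comparison_matrix(importances, eps=1e-8):
    """Pairwise criteria comparison per Example 4: entry (i, k) starts as
    importance_i - importance_k and is then squashed into (0, 1). Min-max
    rescaling is used here as a stand-in for the Definition 6 normalization."""
    imp = np.asarray(importances, dtype=float)
    diff = imp[:, None] - imp[None, :]      # importance_i - importance_k
    lo, hi = diff.min(), diff.max()
    rng = hi - lo
    if rng == 0:
        rng = 1.0
    scaled = (diff - lo) / rng              # map into [0, 1]; diagonal -> 0.5
    return eps + (1 - 2 * eps) * scaled     # keep values strictly inside (0, 1)

# Importance scores from the paper's iris example (C1..C4).
print(criteria_comparison_matrix([0.0, 0.0118, 0.0786, 0.9096]).round(3))
```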

The algorithm to get the preference relation between the alternatives and the criterion
First of all, we calculate the average value of each criterion under each alternative from the previous data. Then, according to the calculated average values, we obtain the preference relations between the alternatives and each criterion by the following algorithm. Let A = {A_1, A_2, A_3, ..., A_n} be the finite set of n alternatives and C = {C_1, C_2, C_3, ..., C_m} the set of criteria with which the elements of A are compared in the hierarchical structure. V_n = {v_n1, v_n2, ..., v_nm} is the vector of average values of the m criteria under alternative A_n, and C_new = {C_new1, C_new2, ..., C_newm} is the set of criterion values of a new instance. We take the following steps to obtain the preference relations between the alternatives and each criterion, and the corresponding preference relation matrices.
Step 1 Under each alternative, calculate the average value of each criterion from the previous data; V_n = {v_n1, v_n2, ..., v_nm} is the vector of average values of the m criteria under alternative A_n.
Step 2 C_new = {C_new1, C_new2, ..., C_newm} is the set of criterion values of the new instance. For criterion C_newm, the set of its preferences for the n alternatives is p_m = {p_m1, p_m2, ..., p_mn}, where p_mn = |C_newm − v_nm|.
Step 3 Build the preference relation matrices. If there are m criteria, we establish m matrices, each of size n × n. The matrix U_m represents the preferences among the alternatives on criterion C_m, and its entry U_m^(ij) is the result of the preference comparison between A_i and A_j on C_m, obtained from p_mi and p_mj.
Step 4 Use the method in Definition 6 to normalize the numbers in each matrix so that every number lies in the interval (0, 1). A code sketch of these steps is given below.
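A compact sketch of Steps 1-4, under two stated assumptions: the pairwise entry on criterion m is taken as the difference p_mi − p_mj (so a smaller distance to the new instance yields a smaller, i.e. more preferred, value, consistent with the interpretation of v_ik above), and min-max rescaling into (0, 1) stands in for the Definition 6 normalization.

```python
import numpy as np

def preference_relations(history, new_instance, eps=1e-8):
    """Sketch of Steps 1-4 for the alternative-vs-criterion preference relations.
    `history` maps each alternative to its past instances (rows = instances,
    columns = criteria); `new_instance` holds the criterion values C_new of the
    case to be classified. Both the pairwise comparison (a difference of
    distances) and the rescaling are illustrative assumptions."""
    alternatives = list(history)
    # Step 1: average value of every criterion under every alternative.
    averages = np.array([np.asarray(history[a], dtype=float).mean(axis=0)
                         for a in alternatives])
    # Step 2: p_mn = |C_new_m - v_nm| for every criterion m and alternative n.
    p = np.abs(np.asarray(new_instance, dtype=float)[None, :] - averages)
    matrices = []
    for m in range(p.shape[1]):
        # Step 3: pairwise comparison on criterion m (smaller distance = preferred).
        diff = p[:, m][:, None] - p[:, m][None, :]
        # Step 4: rescale into (0, 1); the antisymmetry keeps the diagonal at 0.5.
        lo, hi = diff.min(), diff.max()
        rng = (hi - lo) if hi > lo else 1.0
        matrices.append(eps + (1 - 2 * eps) * (diff - lo) / rng)
    return matrices

# Hypothetical usage: past data per alternative, then a new two-criterion instance.
history = {"A1": [[5.0, 3.4], [4.9, 3.1]], "A2": [[6.0, 2.8], [6.3, 2.9]]}
matrices = preference_relations(history, new_instance=[5.1, 3.3])
```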
Example 6 Take the iris data set as an example. There are three categories in the iris data set, represented by A_1, A_2, and A_3. Each instance has four attributes, represented by C_1, C_2, C_3, and C_4. The authors select the first 20 instances of each alternative in the iris data set for calculation.
Through calculation, under alternative A 1 , the set of average values of the four criteria is

Constructing the preference relations with OPNs
In this paper, the preference relation matrix given by experts and the corresponding preference relation matrix calculated by the machine learning algorithm need to be integrated into a preference relation matrix with OPNs. The expert gives an n × n matrix U_1 to express the preference relation of the alternatives with respect to a criterion (or the preference relation of the criteria with respect to the overall objective); each value in U_1 is denoted μ_ik. In addition, the corresponding n × n matrix U_2 is calculated through the algorithm described in "The algorithm to get the preference relation between the alternatives and the criterion" or "Compare the importance of criteria through the ID3 algorithm" sections; each value in U_2 is denoted v_ik. The final matrix U combines these two matrices: each of its values U_ik is an OPN, U_ik = (μ_ik, v_ik).
Example 7 Suppose the expert gives a 4 × 4 matrix U_1 to show the preference relation of the alternatives with respect to the criterion C_1, and the algorithm described in "The algorithm to get the preference relation between the alternatives and the criterion" or "Compare the importance of criteria through the ID3 algorithm" sections calculates the corresponding 4 × 4 matrix U_2.
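A minimal sketch of this combination step: the expert matrix and the algorithm matrix are zipped element by element into pairs (μ_ik, v_ik). Plain tuples stand in for the OPN type, and the small matrices below are made up for illustration.

```python
def combine_to_opns(expert_matrix, algorithm_matrix):
    """Zip an expert preference matrix with the corresponding algorithm-derived
    matrix so that every entry becomes an OPN: mu from the expert, v from the
    algorithm."""
    n = len(expert_matrix)
    return [[(expert_matrix[i][k], algorithm_matrix[i][k]) for k in range(n)]
            for i in range(n)]

expert = [[0.5, 0.7], [0.3, 0.5]]       # hypothetical expert matrix U_1
algorithm = [[0.5, 0.2], [0.8, 0.5]]    # hypothetical algorithm matrix U_2
opn_matrix = combine_to_opns(expert, algorithm)
# opn_matrix == [[(0.5, 0.5), (0.7, 0.2)], [(0.3, 0.8), (0.5, 0.5)]]
```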

Consistency checking
Before deriving the priorities of the alternatives and criteria, we need to check the consistency of the fuzzy preference relations. Only after the consistency check passes can we proceed to the next step; if it fails, we need to repair the inconsistent fuzzy preference relation. The most commonly used consistency checking method in the AHP was proposed by Saaty (1977). The consistency index (CI) is calculated as (12):

CI = (λ_max − n) / (n − 1),

and the consistency ratio (CR) is expressed as (13):

CR = CI / RI.
Here, n is the dimension of the multiplicative preference relation, λ_max is the maximum eigenvalue of the preference relation matrix, and RI is a random index that depends on n. It is generally accepted that when CR ≤ 0.1 the multiplicative preference relation has acceptable consistency and passes the consistency check. However, this method has a drawback: if the consistency check is not passed, we can only ask the experts to give a new preference relation matrix, which is difficult or even unrealistic in many cases.
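For reference, the classical Saaty check described above can be computed as follows; the RI values are the commonly tabulated ones and the example matrix is made up. Note that this is the multiplicative 1-9 check, not the consistency procedure actually used for the OPN relations in this paper.

```python
import numpy as np

# Saaty's random index RI for matrix dimensions 1..10 (commonly tabulated values).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(matrix):
    """CI = (lambda_max - n) / (n - 1) and CR = CI / RI for a multiplicative
    pairwise comparison matrix; CR <= 0.1 is the usual acceptance threshold."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    lambda_max = np.max(np.linalg.eigvals(A).real)
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# Example: a hypothetical 3 x 3 multiplicative (Saaty 1-9) matrix.
A = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
print("CR =", round(consistency_ratio(A), 3))   # well below 0.1, so acceptable
```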
In 2014, Xu and Liao (2014) proposed the IFAHP and adopted a new consistency checking method in it. Besides checking consistency, this method can also automatically repair inconsistent fuzzy preference relations into consistent ones. Because the OPN is a new fuzzy mathematical concept that generalizes intuitionistic fuzzy numbers, this method is also suitable for the fuzzy analytic hierarchy process with OPNs. Therefore, this paper uses it to perform the consistency check and to repair inconsistent preference relations.
In the method proposed by Xu and Liao (2014), let R = (r_ij)_{n×n} be a preference relation. An algorithm is first required to construct a perfectly multiplicative consistent preference relation R̄ = (r̄_ij)_{n×n}; the relation R is then accepted when its distance from R̄ is below the consistency threshold τ.
If the consistency check fails, Xu and Liao (2014) provide an algorithm to repair the inconsistent preference relation; its steps are as follows. Step 1 Let t be the number of iterations and set t = 1. Construct the perfectly multiplicative consistent preference relation R̄ from R^(t) through the algorithm for constructing the perfectly multiplicative consistent preference relation.
Step 2 Calculate the distance d(R^(t), R̄) between R^(t) and R̄. If d(R^(t), R̄) < τ, output R^(t); otherwise, go to Step 3.
Step 3 Construct the fused preference relation R^(t+1), where λ is a controlling parameter determined by the decision-maker: the smaller the value of λ, the closer r_ij^(t+1) is to r_ij^(t). Let t = t + 1, and then go to Step 2.
Through this algorithm, we can automatically improve the consistency level of the fuzzy preference relations and pass the consistency check without losing a large amount of the original information.

Calculate the priority vector
After performing the consistency check and obtaining consistent preference relations, we can calculate the priority vector ω = (ω_1, ω_2, ω_3, ..., ω_n) of each preference relation. Because OPNs are a generalization of IFNs, we adopt the priority vector calculation used in the IFAHP (Xu and Liao 2014), given as (19). After calculating all priority vectors, we fuse the weights from the lowest level to the highest level by the operational rules of OPNs, rank the overall weights, and then choose the best alternative. The OPN operations are described in Definition 4, and the method of ranking the overall weights is described in Definition 3.

Complexity analysis
Consistent with existing related work, we evaluate the complexity of the proposed method by the time complexity T, based on the number of multiplications within the method (Lima Junior et al. 2014). For n alternatives and m criteria, the method requires n*(n − 1) operations to construct the preference relation of the criteria with respect to the overall objective, 2n*n operations to calculate the priority vector, 2n*n*m operations to calculate the weights of the alternatives over the criteria, and finally n*m operations to compute the score of each alternative. The time complexity of the method can therefore be expressed as (20):

T = n*(n − 1) + 2n*n + 2n*n*m + n*m,

which is of order O(n²m).

Experiment
Because the method proposed in this paper needs some past data, the authors use the Iris data set (Dua and Graff 2017), a very widely used data set in machine learning, to test the method. In this part, the authors use three examples, all taken from the iris data set. The first two examples use different instances of the iris data set, and in both of them the experts and the machine learning algorithm give reasonable judgments; through these two examples, the authors verify that the proposed method can make a reasonable decision. The third example is modified from the second: the authors change the reasonable judgments given by the experts into wrong judgments, and use it to verify that, with the proposed method, the judgments given by the machine learning algorithm can correct the wrong judgments given by the expert so that the final decision is still correct. In the beginning, we need to construct the hierarchy of the problem. The iris data set contains three classes, Iris Setosa, Iris Versicolour, and Iris Virginica; each class contains 50 instances, and each class is linearly separable from the other two. Each instance has four attributes: sepal length (cm), sepal width (cm), petal length (cm), and petal width (cm). The hierarchical structure of the problem is shown in Fig. 3.
There are four criteria for this problem: sepal length (C_1), sepal width (C_2), petal length (C_3), and petal width (C_4), and three alternatives: Iris Setosa (A_1), Iris Versicolour (A_2), and Iris Virginica (A_3). The hierarchy consists of three levels: the overall objective at Level 1, the criteria at Level 2, and the alternatives at Level 3. (Table 3 gives the preference relation of criteria with respect to the overall objective given by the expert; its first two rows are C_1: 0.5, 0.5, 0.7, 0.9 and C_2: 0.5, 0.5, 0.6, 0.9.) There are 150 instances in the Iris data set. After removing the three instances we want to test, the remaining 147 instances are used by the algorithm to compare the importance of the criteria with respect to the overall objective. We use the algorithm proposed in the "Compare the importance of criteria through the ID3 algorithm" section; the decision tree generated by the ID3 algorithm from these 147 instances is shown in Fig. 4.
According to the method described above and Fig. 4, the importance scores of the four criteria are: importance(C_1) = 0, importance(C_2) = 0.0118, importance(C_3) = 0.0786, and importance(C_4) = 0.9096. The resulting preference relation of the criteria calculated by the algorithm is shown in Table 2.

Examples with experts give reasonable judgments
We compare the importance of the criteria with respect to the overall goal. According to the 0.1-0.9 scale in Table 1, the preference relation of the criteria with respect to the overall objective given by the expert is shown in Table 3. We synthesize the preference relations given in Tables 2 and 3 and obtain the final preference relation of the criteria with respect to the overall objective, expressed in OPNs; it is shown in Table 4.
We check the consistency of the preference relations in Table 4, and repair the inconsistent preference relations. The results after repair are shown in Table 5.
According to Table 5, we can use (19) to calculate the priority vector. (Tables 7, 8, and 9 give the preference relations of the alternatives with respect to the criteria C_2, C_3, and C_4 given by the experts; Tables 12 and 13 give the corresponding preference relations for C_3 and C_4 calculated by the algorithm.) From the remaining 147 instances of the iris data set, the average values of the three alternatives are calculated.

Experiment case 1
We select the last instance of the first class from the iris data set for the experiment, judge the category of the instance through the method proposed in this paper, and compare the result with the actual situation to see whether the proposed algorithm gives a reasonable conclusion. The value of this instance is I_1 = [5, 3.3, 1.4, 0.2].
Firstly, four preference relations of the alternatives with respect to the criteria are given by experts; they are shown in Tables 6, 7, 8, and 9.
By comparing the value of I_1 with Average(A_1), Average(A_2), and Average(A_3), we can obtain four preference relations of the alternatives with respect to the criteria through the algorithm in "The algorithm to get the preference relation between the alternatives and the criterion" section; they are shown in Tables 10, 11, 12, and 13. We combine Tables 6 and 10, Tables 7 and 11, Tables 8 and 12, and Tables 9 and 13 to form four preference relations of the alternatives with respect to the criteria expressed by OPNs, as shown in Tables 14, 15, 16, and 17. We then check the consistency of the four preference relations and repair the inconsistent ones, obtaining the preference relations shown in Tables 18, 19, 20, and 21. Let ω_ci = [ω_1i, ω_2i, ..., ω_ji] be the weights of the alternatives over criterion C_i. According to Tables 18, 19, 20, and 21 we calculate these weights, and finally we calculate the score W_i of each alternative A_i. Because s(W_1) > s(W_2) > s(W_3), we judge that the instance belongs to alternative A_1 (Iris Setosa). This is consistent with the actual situation, so the decision is accurate.
To further verify the correctness of the decision, we performed the Kolmogorov-Smirnov test (Wang and Wang 2010) under the hypothesis that the test data differ in distribution from the data of the Iris Setosa category. The Kolmogorov-Smirnov test on the four attribute values gave p-values of 1.0, 0.8, 0.92, and 1.0, respectively. At the significance level α = 0.05 the hypothesis of a difference is not supported (the null hypothesis of a common distribution cannot be rejected), indicating that the test sample is not statistically different from the data of the Iris Setosa category.
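The paper does not state exactly how the single test instance was compared with the class data, so the sketch below only shows generic per-attribute two-sample Kolmogorov-Smirnov testing with SciPy, here comparing the Setosa and Versicolour attribute samples (which, unlike the paper's matched case, should yield small p-values).

```python
from scipy import stats
from sklearn.datasets import load_iris

iris = load_iris()
setosa = iris.data[iris.target == 0]       # Iris Setosa instances
versicolour = iris.data[iris.target == 1]  # Iris Versicolour instances

# Per-attribute two-sample Kolmogorov-Smirnov test: a large p-value means the
# hypothesis that both samples share a distribution cannot be rejected.
for j, name in enumerate(iris.feature_names):
    statistic, p_value = stats.ks_2samp(setosa[:, j], versicolour[:, j])
    print(f"{name}: KS statistic = {statistic:.3f}, p = {p_value:.4f}")
```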

Experiment case 2
We select the last instance of the second class from the iris data set for the experiment, judge the category of the instance through the method proposed in this paper, and compare the result with the actual situation to see whether the proposed algorithm gives a reasonable conclusion. The value of this instance is I_1 = [5.7, 2.8, 4.1, 1.3]. Firstly, four preference relations of the alternatives with respect to the criteria are given by experts; they are shown in Tables 22, 23, 24, and 25.
By comparing the value of I_1 with Average(A_1), Average(A_2), and Average(A_3), and through the algorithm in "The algorithm to get the preference relation between the alternatives and the criterion" section, we can obtain four preference relations of the alternatives with respect to the criteria; they are shown in Tables 26, 27, 28, and 29. We combine Tables 22 and 26, Tables 23 and 27, Tables 24 and 28, and Tables 25 and 29 to form four preference relations of the alternatives with respect to the criteria expressed by OPNs, as shown in Tables 30, 31, 32, and 33.
We check the consistency of the four preference relations and fix the inconsistent preference relations. Finally, we get the preference relations shown in Tables 34, 35, 36 and 37.
Finally, we calculate the score W i of alternative A i .
Because s(W_2) > s(W_3) > s(W_1), we judge that the instance belongs to alternative A_2 (Iris Versicolour). This is consistent with the actual situation, so the decision is accurate.
To further verify the correctness of the decision, we performed the Kolmogorov-Smirnov test under the hypothesis that the test data differ in distribution from the data of the Iris Versicolour category. The Kolmogorov-Smirnov test on the four attribute values gave p-values of 0.84, 1.0, 0.76, and 1.0, respectively. At the significance level α = 0.05 the hypothesis of a difference is not supported, indicating that the test sample is not statistically different from the data of the Iris Versicolour category.

Examples with experts give invalid or unreasonable judgments
Similar to the examples in which experts give reasonable judgments in the "Examples with experts give reasonable judgments" section, we compare the importance of the criteria with respect to the overall goal. The preference relation of the criteria with respect to the overall objective given by the experts is shown in Table 38. It can be seen from Table 38 that this preference relation is invalid (all values are 0.5), which would make it impossible to reach a decision with the previous FAHP methods. However, in the method proposed in this paper, the results given by the machine learning algorithm can be used to modify the results given by the experts. The preference relation of the criteria with respect to the overall objective calculated by the machine learning algorithm can be seen in Table 2.
We synthesize the preference relations given in Tables 38 and 2 and obtain the final preference relation of the criteria with respect to the overall objective, expressed in OPNs; it is shown in Table 39.
We check the consistency of the preference relations in Table 39, and repair the inconsistent preference relations. The results after repair are shown in Table 40.
According to Table 40, we can use (19) to calculate the priority vector. We again select the last instance of the second class from the iris data set for the experiment; the value of this instance is I_1 = [5.7, 2.8, 4.1, 1.3], the same instance as in the "Experiment case 2" section.
Firstly, four preference relations of the alternatives with respect to the criteria are given by experts; they are shown in Tables 41, 42, 43, and 44. From Tables 41 to 44 we can see that all values are 0.5, so the preference relations of the alternatives with respect to the criteria given by the expert are invalid.
The preference relations of the alternatives with respect to the criteria calculated by the algorithm are shown in Tables 26, 27, 28, and 29. We combine Tables 41 and 26, Tables 42 and 27, Tables 43 and 28, and Tables 44 and 29 to form four preference relations of the alternatives with respect to the criteria expressed by OPNs, as shown in Tables 45, 46, 47, and 48. We check the consistency of the four preference relations and fix the inconsistent ones; finally, we get the preference relations shown in Tables 49, 50, 51, and 52.
Let ω_ci = [ω_1i, ω_2i, ..., ω_ji] be the weights of the alternatives over criterion C_i. According to Tables 49, 50, 51, and 52, we can calculate these weights. Because s(W_2) > s(W_3) > s(W_1), we judge that the instance belongs to alternative A_2 (Iris Versicolour). This is consistent with the actual situation, so the decision is accurate. When the experts give wrong or invalid judgments, those judgments are corrected by the machine learning algorithm, so that the correct conclusion is still drawn in the end.
To further verify the correctness of the decision, we performed the Kolmogorov-Smirnov test under the hypothesis that the test data differ in distribution from the data of the Iris Versicolour category. The Kolmogorov-Smirnov test on the four attribute values gave p-values of 0.84, 1.0, 0.76, and 1.0, respectively. At the significance level α = 0.05 the hypothesis of a difference is not supported, indicating that the test sample is not statistically different from the data of the Iris Versicolour category.

Conclusion
This paper uses OPNs, a new fuzzy number theory, to improve the original fuzzy analytic hierarchy process and proposes the FAHP with OPNs for the first time. The authors use a machine learning algorithm to alleviate the drawback that the original fuzzy analytic hierarchy process depends too much on the subjective judgments of experts, and thereby avoid, to a certain extent, the decision-making errors caused by deviations in experts' judgments. We show through two examples that the proposed algorithm can obtain reasonable results. Finally, we use a third example to show that, when the expert cannot give a correct (or valid) judgment, the method proposed in this paper allows the judgment given by the machine learning algorithm to correct the invalid judgment given by the expert, so that the method can still draw reasonable conclusions. The OFAHP can be used in scenarios where objectivity is required and a small amount of previous data exists, for example, selecting commodity suppliers or selecting project implementation solutions. The current limitation of the proposed method is that it requires a certain amount of previous data, unlike the traditional analytic hierarchy process, which only requires the judgments of experts. At present, the subjective judgment given by experts and the objective judgment given by the machine learning algorithm carry the same weight; future work could consider assigning different weights to the two, which would allow the method to adapt better to more complex situations in real society.
Data availability All data generated or analysed during this study are included in this article.

Conflict of interest
The authors declare that they have no conflict of interest.