A preference structure in multi-attribute decision making: an algorithmic approach based on hesitant fuzzy sets

This paper introduces a new methodology for solving multi-attribute decision making (MADM) problems under a hesitant fuzzy environment. The uncertainty in hesitant fuzzy elements (HFEs) is derived by means of entropy. The resulting uncertainty is subsequently used in each HFE to derive a single representative value (RV) of the alternatives in each attribute. Our work transforms the RVs into their linguistic counterparts and then formulates a methodology for pairwise comparison of the alternatives via their linguistically defined RVs. The eigenvector corresponding to the maximum eigenvalue of the pairwise comparison matrix prioritizes the alternatives in each attribute. The priority vectors of the alternatives are aggregated to derive the weights of the attributes using quadratic programming. The weighted aggregation of the attribute values provides the ranking of the alternatives in MADM. An algorithm is written to validate the procedure developed. The proposed methodology is compared with similar existing methods, and the advantages of our method are presented. The robustness of our methodology is demonstrated through sensitivity analysis. To highlight the procedure, a car purchasing problem is illustrated.


Introduction
Multiple attribute decision making (MADM) based on hesitant fuzzy sets (HFS) has attracted the attention of decision-makers (DMs) and practitioners because of its wide range of applications in various fields of management (Chen and Hong 2014; Gou et al. 2017; Liao et al. 2020; Sellak et al. 2018). HFS is an extension of fuzzy sets in which the membership of an element is characterized by multiple values. The present work introduces a new methodology for solving MADM based on HFS. The alternative assessment by a DM usually fluctuates between several possible values when a clear and precise valuation of the alternative cannot be provided. This is because the DM is hesitant and unable to assign a single numerical/linguistic assessment to an alternative. HFS is an appropriate tool to deal with these types of fluctuating situations (Chen and Hong 2014; Gou et al. 2017; Liao et al. 2020; Sellak et al. 2018; Torra 2010; Wang et al. 2015). For example, while evaluating the ''price'' of a car, a buyer may waver among assessments such as somewhat ok, reasonably well, satisfactory price, etc. In numeric terms, the level of fluctuation may be (0.4, 0.6, 0.8). These obscurities on the part of the buyer or the DM arise mainly from a lack of knowledge or indecisiveness about the attribute ''price''. Inspired by these challenging conditions in decision making, our work proposes a new methodology to appropriately select or rank alternatives that are given as hesitant fuzzy values (HFVs), satisfying all the criteria efficiently. In recent years, some methods (Jibin Lan 2017; Wang et al. 2015; Chen and Hong 2014) have been provided for solving MADM problems based on HFS. Though the methods above shed light on dealing with MADM under an HFS environment and obtain the desired solutions, there is still a shortfall in precisely ascertaining the inherent uncertainties in the HFS. Our proposed procedure attempts to address these deficits.
The uniqueness of the proposed work lies in identifying and alleviating the uncertainties in the hesitant fuzzy values attributed to the alternative assessments. Our work proposes to use entropy for this purpose. Entropy (Kosko 1986; Yager 1995) is deeply connected with the ambiguous behaviour of the DM, especially when he/she is hesitant to prescribe any single value and is tempted towards multiple values in his/her alternative assessments. The proposed work determines the entropy prevalent in the hesitant views of the DM. The entropies of each alternative over the attributes are subsequently integrated into the HFEs to obtain a single aggregated numeric value, the RV. The numerically defined RVs are transformed into their linguistic counterparts, as judging alternatives through linguistic expressions is convenient and widely accepted in practice. The rationale behind the importance of linguistic expressions in decision-making is explained below.
In real-life decisions, linguistic expressions are favoured as they are similar to the day-to-day language of human beings. Besides, in many situations the information about the alternatives cannot be assessed precisely in quantitative terms but only in qualitative terms. For example, in a car purchasing problem, for an attribute such as ''colour'' or ''comfort'' a buyer may express his/her preferences more conveniently in linguistic valuations rather than in precise quantitative form. In another instance, a buyer may more conveniently describe the car price as ''very high'' rather than with a numerical term of ''0.2'' denoting low satisfaction. Therefore, it is desirable to assess the alternatives through linguistic expressions, as single numeric value assessments may deviate from real-world decisions and cause loss of information. Additionally, linguistic expressions are more pertinent and substantially compatible with real-world decisions.
The other innovative idea of our work is to transform the RVs into their linguistic counterparts and to formulate a methodology for pairwise comparison of the alternatives via their RVs. The pairwise comparison of the alternatives leads to a prioritization of the alternatives in each attribute.
The comparison of linguistically defined RVs in each attribute forms pairwise comparison matrices that are subsequently converted to fuzzy preference relation (FPR) matrices. Details about the FPR are found in Wang and Parkan (2005). The eigenvector method (Wang and Parkan 2005) is applied to the fuzzy pairwise comparison matrices and modelled as a linear programming problem (LPP). The solution to the LPP provides the ranking of the alternatives as a ''priority vector''. In the light of the methodology in Xu et al. (2014), we use quadratic programming to aggregate the priority vectors of each attribute to obtain the final ranking of the alternatives in the MADM problem.
Some other works are available in the literature of MADM under a hesitant fuzzy environment. The work given in Wang et al. (2015) deals with the solution of multicriteria decision making (MCDM) problems under the hesitant fuzzy linguistic term set (HFLTS). In that methodology, an outranking approach is given to solve MCDM problems. The outranking approaches given in Sellak et al. (2018) are combined with HFLTS to solve MCDM problems. The work in Chen and Hong (2014) deals with HFLTS in MCDM and uses the confidence measure for its solution. Distance and similarity measures of HFS are derived and used to solve MCDM problems in Li et al. (2015); there, the authors take into account both the values of hesitant fuzzy elements (HFEs) and their cardinalities to calculate the distance measure. The paper by Liao et al. (2018) converts quantitative data into hesitant fuzzy linguistic terms and uses the ORESTE method for solving MCDM problems. HFLTS in the context of MCDM is discussed in detail in Wei et al. (2014), where the authors discuss two aggregation operators, LWA and LOWA, for solving MCDM problems. Prospect theory and PROMETHEE are used for solving MCDM based on HFS in Peng et al. (2016). The correlation measures of HFLTS are applied for solving MCDM in Liao et al. (2020). Similarity and entropy measures, along with an interval-bound footprint for hesitant fuzzy sets, are given in an MCDM framework in Hu et al. (2018). In Riera et al. (2015), a fuzzy decision-making model based on discrete fuzzy numbers is proposed for solving MCDM problems. In Qian et al. (2013), HFS is transformed into intuitionistic fuzzy sets and subsequently used to develop a decision support system for MCDM problems. Hesitant fuzzy linguistic entropy and cross-entropy integrated with the queuing method are used to solve MCDM in Gou et al. (2017). Some other methods of HFS in MCDM are found in Liao and Xu (2016).
In all the papers mentioned above, there are, in some form or other, certain deficiencies, mainly concerning the uncertainties in HFS and their integration into the decision process. Our paper addresses these deficiencies and obtains a viable solution for MADM under a hesitant fuzzy environment.

Challenges and gaps
1. To our knowledge, the determination of uncertainty in an HFE incorporating both the number of terms (cardinality) and the membership values of its elements has so far remained a gap in the literature. For example, take an HFE containing a single element, (x, (0.3, 0.4, 0.6, 0.8)). The uncertainty here depends not only on the number of elements in the HFE (the cardinality is 4) but also on the membership values of those elements. Therefore, it is necessary to include both the cardinality and the membership values while aggregating the elements of the HFE to a single value for 'x'. Determining the uncertainty and identifying an aggregation operator for the valuation of an HFE are two challenging tasks.
2. The aggregation of an HFE into an RV and its transformation to an equivalent linguistic counterpart after integrating the inherent uncertainty is very rare in the literature. Therefore, a linguistic transformation process with the above features is a motivating assignment.
3. The prioritization of alternatives, especially when they are assessed as HFEs, and their aggregation across the attributes is a central problem in MADM. Therefore, the identification of an aggregation operator for combining the attribute values of alternatives is an essential task in MADM.

Motivations and contributions
The three problems above motivate us to take up a MADM problem whose alternatives are assessed as HFEs in each attribute. A further motivation of our work is to identify the inherent uncertainties in the alternative assessments and to incorporate them in the solution process of MADM.
1. In order to account for the inherent uncertainty in an HFE, the proposed work identifies its entropy and applies it to aggregate the multiple membership values into a single RV.
2. The numeric RVs are suitably transformed to their linguistic counterparts. The transformation process takes into account that an RV may not match any of the pre-defined basic linguistic terms and, in that case, creates a linguistic term with appropriate semantics.
3. The minimal weighted distance between the priority vectors is identified as the aggregation operator used to aggregate the attribute values of the alternatives.

Structure of the paper
In Sect. 2, we have given the preliminary concepts that are used in our paper. In Sect. 3, we have explained the concept of entropy as uncertainty in HFS and its derivation. Further, in this section, we have incorporated the derived uncertainty and aggregated each element of HFE to obtain its RV. In Sect. 4, we have explained the conversion of RVs that are in numeric terms into their linguistic counterparts. In Sect. 5, we have formulated a linear programming problem to derive the priority vectors of the alternatives. In Sect. 6, quadratic programming is used to aggregate priority vectors over the attributes to obtain the final ranking of the alternatives for MADM. In Sect. 7, we have written an algorithm to describe our methodology. A numerical example is illustrated in Sect. 8 to highlight the procedure developed. In Sect. 9, we have compared our work with other similar works. This section also covers the discussion about algorithm results and sensitivity analysis. Conclusion and the scope for future research are given in Sect. 10.

Preliminaries
Hesitant fuzzy sets (Torra 2010): Let X = {x_1, x_2, …, x_n} be a set containing 'n' elements. A hesitant fuzzy set on X is defined in terms of a function that, applied to each x_i ∈ X, returns a finite subset of [0, 1]; the set of possible membership values of x_i is called a hesitant fuzzy element (HFE).

Fuzzy entropy (Kosko 1986): Let X = {(x_i, l(x_i))} be a fuzzy set with membership values l(x_i) (i = 1, 2, …, n). The nearest distance of the element (x_i, l(x_i)) from a non-fuzzy point is d_N(x_i) = min(l(x_i), 1 − l(x_i)). Similarly, the farthest distance of (x_i, l(x_i)) from a non-fuzzy point is d_F(x_i) = max(l(x_i), 1 − l(x_i)). The entropy E(X) of the fuzzy set X is the ratio of the total nearest distance to the total farthest distance, E(X) = Σ_i d_N(x_i) / Σ_i d_F(x_i).

Virtual linguistic term: Let S = {s_0, s_1, …, s_g} be a basic linguistic term set (BLTS). A virtual linguistic term such as s_3.4 (with 3.4 < g and s_3.4 ∉ S) can be assigned semantics as a fuzzy number interpolated between the semantics of its neighbouring terms s_3 and s_4.

Entropy in hesitant fuzzy sets
Decision making, in general, occurs under an uncertain environment. Imperfect information or fuzziness in the alternative assessments is often described as entropy in the system. In essence, entropy measures the degree of uncertainty associated with an HFS or with the fuzzy messages in HFEs. Following the methodology given in Kosko (1986), our paper proposes the l_p-distance (the Hamming distance when p = 1) between an HFE and its nearest non-fuzzy element to identify the entropy in the structure. The measure of entropy is the ratio of the nearest distance to the farthest distance of the HFE from its non-fuzzy points. The procedure is as follows. Treating ⟨x_i, h_s(x_ij)⟩ as fuzzy messages and taking h_s⁻ and h_s⁺ as the nearest and farthest non-fuzzy messages from the HFE, we have the fuzzy entropy of ⟨x_i, h_s(x_ij)⟩ as

R_p(h_s(x_ij)) = l_p(h_s(x_ij), h_s⁻) / l_p(h_s(x_ij), h_s⁺),   (3.1)

where l_p is the distance between the HFE h_s(x_ij) and h_s⁻,

l_p(h_s(x_ij), h_s⁻) = ( (1/L_i) Σ_{j=1}^{L_i} |h_s(x_ij) − h_s⁻(x_ij)|^p )^{1/p},   (3.2)

and L_i is the cardinality of the HFE ⟨x_i, h_s(x_ij)⟩. The l_p-distance between h_s(x_ij) and h_s⁺ is defined analogously. The entropy R_p(h_s(x_ij)) has the following property: R_p(h_s(x_ij)) is strictly increasing in [0, 0.5] and strictly decreasing in [0.5, 1].
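The entropy computation above can be sketched in a few lines. The snippet below is a minimal illustration, assuming the nearest non-fuzzy message rounds each membership value to its nearest endpoint in {0, 1} and the farthest takes the opposite endpoint, so the 1/L_i normalization cancels in the ratio; the function name `entropy_hfe` is ours, not the paper's.

```python
def entropy_hfe(hfe, p=1):
    """l_p-ratio entropy of a hesitant fuzzy element (sketch of Eq. (3.1)).

    Per-element deviations from the nearest non-fuzzy message are
    min(mu, 1 - mu); deviations from the farthest are max(mu, 1 - mu)."""
    nearest = sum(min(mu, 1 - mu) ** p for mu in hfe)
    farthest = sum(max(mu, 1 - mu) ** p for mu in hfe)
    return (nearest / farthest) ** (1 / p)
```

For p = 1 this gives, for instance, entropy_hfe((0.6, 0.4, 0.1)) = 0.9/2.1 ≈ 0.429, the maximal value 1 at membership 0.5, and zero entropy for crisp values.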
Our work interprets the entropy R_p(h_s(x_ij)) as the measure of uncertainty or risk associated with the HFE ⟨x_i, h_s(x_ij)⟩. To find the RV of ⟨x_i, h_s(x_ij)⟩, it is necessary to identify an aggregation operator that not only aggregates the elements h_s(x_ij) (j = 1, 2, …, L_i) but also assimilates the entropy R_p(h_s(x_ij)) in the aggregation process. The procedure for deriving the RVs is given below. If c_i is the risk-taking ability, or compensatory behaviour, of the DM for the HFE, we have the RV of the ith alternative (taking p = 1) as

RV_i = (1 − c_i) · min_j(x_ij) + c_i · max_j(x_ij).   (3.3)

Details about the compensatory aggregation operator are found in Rao et al. (1988) and Zimmermann (1978).
In the above equation, if the DM is pessimistic (a fully risk-averse person), c_i = 0 and RV_i = min_j(x_ij). This indicates that the DM is pessimistic, or non-compensatory, and prefers the minimum value of the HFE as his/her decision. Similarly, in the case of an optimistic person, c_i = 1 and RV_i = max_j(x_ij). However, in real situations, decisions are neither fully compensatory (optimistic) nor non-compensatory (pessimistic) but lie in between, depending on the mind-set of the DM. Our paper takes the risk-taking ability c_i of the DM for the HFE h_s(x_ij) as

c_i = 1 − R_1(h_s(x_ij)).   (3.4)

Equation (3.4) represents the certainty factor in the HFE h_s(x_ij). In other words, the risk-taking attitude or compensatory behaviour of the DM is nothing but the amount of safe bet involved in the HFE h_s(x_ij).
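The limit cases above pin down the shape of the RV operator. The sketch below assumes, per the "certainty factor" reading, that c = 1 − R_1; with that assumption it reproduces the value RV = 0.386 used for car P_1's ''Price'' in the numerical example of Sect. 8.

```python
def representative_value(hfe):
    """Compensatory RV of an HFE (sketch of Eqs. (3.3)-(3.4)).

    Assumption: certainty factor c = 1 - R_1, where R_1 is the
    Hamming-distance entropy ratio. c = 0 recovers the minimum
    (pessimistic DM) and c = 1 the maximum (optimistic DM),
    matching the limit cases stated in the text."""
    r1 = sum(min(mu, 1 - mu) for mu in hfe) / sum(max(mu, 1 - mu) for mu in hfe)
    c = 1.0 - r1  # assumed form of the certainty factor
    return (1 - c) * min(hfe) + c * max(hfe)
```

For the HFE (0.6, 0.4, 0.1), R_1 = 0.9/2.1 ≈ 0.429, c ≈ 0.571, and RV ≈ 0.386, consistent with the worked example later in the paper.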

The linguistic equivalent of hesitant fuzzy sets
In our daily life, very often, precise or quantitative information cannot be stated conveniently. For example, the colour or comfort of a car may be more suitably stated in linguistic terms as good colour or excellent comfort level, etc. In MADM problems, alternative evaluations such as 'reasonably good', 'poor', 'excellent' are more conveniently expressed linguistically in comparison with their numerical counterparts. Therefore, it is desirable to use linguistic expressions in real-world decision-making problems to make it more genuine and analogous to human decision making. To express the alternative assessments in MADM in linguistic terms, our work uses ordered linguistic terms called Basic Linguistic Term Set (BLTS). The cardinality of the linguistic terms in BLTS is dependent upon the granularity of uncertainty involved. Our work uses fuzzy numbers to define the semantics of the linguistic terms in BLTS.
The RVs of HFEs in Eq. (3.3), which embody the DM's risk outlook, are numerical. The theory of fuzzy sets is used to transform the numerically defined RVs into their linguistic counterparts. The alternatives are then compared pairwise using their linguistically defined RVs to obtain the preference of one alternative over another. The methodology is explained in the following steps. Step 1: Take the BLTS = {s_0, s_1, …, s_g}, consisting of (g + 1) basic linguistic terms. Assume their semantics as shown in Fig. 1a. The graphical representations of the semantics are shown in Fig. 1b.
Step 2: Let A_ij be the assessment of the ith alternative in the jth attribute as an HFE. Using Eqs. (3.1) and (3.3), derive RV_ij = a_ij ∈ [0, 1].
Step 3: Derive the linguistic counterpart of a_ij as s_φij using the following steps. Step 3.1: Use the procedure given in Wang et al. (2015) to obtain s_φij, where φ_ij = g · a_ij (Eq. (4.1)). Step 3.2: If s_φij completely matches a linguistic term in the BLTS, i.e. s_φij ∈ BLTS, we have the linguistic equivalent of a_ij as s_φij; otherwise go to the next step.
Step 3.3: Let s_t ≤ s_φij ≤ s_{t+1}, with s_t, s_{t+1} ∈ BLTS. Take the semantics of s_φij as the triangular fuzzy number (s_φijL, s_φijM, s_φijR), obtained from Eqs. (4.2)-(4.4), where s_φijM is the point in the domain of the fuzzy number s_φij with full membership. Step 4: Find the similarity degree of s_φij with each s_t ∈ BLTS using Eq. (4.5). Step 5: If s_φij and s_φkj are the fuzzy numbers representing the linguistic equivalents of the RVs of the ith and kth alternatives in the jth attribute, the degree of superiority of the ith alternative over the kth alternative in the jth attribute is given by Eq. (4.6). Step 6: Form the pairwise comparison matrix of the alternatives in the jth attribute (Eq. (4.7)). The pairwise comparison matrix in Eq. (4.7) is converted to a fuzzy preference relation matrix of the alternatives in the next section.
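The steps above can be sketched as follows. The index mapping φ_ij = g · a_ij matches the worked example in Sect. 8 (0.386 · 8 = 3.088 ≈ s_3.1); the uniform triangular semantics and the vertex-distance similarity below are our stand-in assumptions, not necessarily the paper's Eqs. (4.2)-(4.5).

```python
def virtual_term_index(a, g=8):
    # Map an RV a in [0, 1] to the virtual linguistic index phi = g * a.
    return g * a

def triangular_semantics(phi, g=8):
    # Assumed semantics of s_phi: a triangular fuzzy number on the uniform
    # partition of [0, 1], clipped at the domain boundaries.
    return (max((phi - 1) / g, 0.0), phi / g, min((phi + 1) / g, 1.0))

def similarity(tri_a, tri_b):
    # Stand-in similarity degree between two triangular fuzzy numbers:
    # one minus the mean absolute vertex distance.
    return 1.0 - sum(abs(x - y) for x, y in zip(tri_a, tri_b)) / 3.0
```

With g = 8 (nine basic terms, as in the numerical example), an RV of 0.386 lands on the virtual term s_3.088, and a term is maximally similar to itself.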

Determination of the priority vector of alternatives
In real-world situations, it is difficult to identify the prioritization amongst the alternatives, especially when their evaluations are based on multiple attributes and assessed as HFVs. Several methods are available to elicit the ''priority vector'' of the alternatives from an FPR: the eigenvector method (Wang and Parkan 2005), the normalizing rank aggregation method (Xu et al. 2009), the logarithmic least squares method (Crawford and Williams 1985; Bozoki and Fulop 2018), etc. To obtain the ''priority vector'' of the alternatives, it is first necessary to transform the matrix in Eq. (4.7) into a fuzzy preference relation matrix (Wang and Parkan 2005). The pairwise comparison matrix of Eq. (4.7) is reproduced in Eq. (5.1). In order to transform A_j into a fuzzy preference relation matrix, the matrix A_j must satisfy the FPR conditions of Eq. (5.2), namely 0 ≤ p_rkj ≤ 1 and p_rkj + p_krj = 1. To satisfy Eq. (5.2), the entries s_rkj of A_j are transformed into p_rkj according to Eq. (5.3).
Thus, we obtain the fuzzy preference relation matrix. Using the procedure given in Wang and Parkan (2005), if (w_1, w_2, …, w_m) is the weight vector of the alternatives A_1, A_2, …, A_m, with w_i ≥ 0 (i = 1, 2, …, m) and Σ_{i=1}^{m} w_i = 1, for attribute j (j = 1, 2, …, n), then we have Eq. (5.7), where e⁻ = (e_1⁻, e_2⁻, …, e_m⁻) and e⁺ = (e_1⁺, e_2⁺, …, e_m⁺) are the deviation vectors. Equation (5.7) can be expressed as a linear programming problem.
The solution to the above LPP gives us the ''priority vector'' of the alternatives in the jth attribute. Let the solution be W_j* = (d_1^j, d_2^j, …, d_m^j) (5.9), where d_i^j represents the assessment of the ith alternative in the jth attribute (i = 1, 2, …, m; j = 1, 2, …, n). The problem now is how to aggregate the ''priority vectors'' across all the attributes to arrive at a final ranking of the alternatives in MADM. The next section deals with this problem.
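For illustration, a priority vector can also be read off an FPR with the normalizing rank aggregation method that this section cites (Xu et al. 2009); the paper itself solves the Wang and Parkan (2005) eigenvector LP, so the sketch below is a lightweight surrogate, not the paper's exact method.

```python
def priority_vector(fpr):
    # Normalizing rank aggregation: normalize the row sums of the
    # fuzzy preference relation so the priorities add up to 1.
    row_sums = [sum(row) for row in fpr]
    total = sum(row_sums)
    return [r / total for r in row_sums]
```

An indifferent FPR (all preference degrees 0.5) yields the uniform priority vector, as expected.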

Decision function based on the distance aggregation method
To rate an alternative, it is necessary to aggregate its values in each attribute using an aggregation operator. Several aggregation operators are available in the literature; to name a few, the MIN operator (Zimmermann 1978), compensatory operators (Rao et al. 1988), utility-theory-based methods (Cohon 2004), weighting methods (Cohon 2004), OWA operators (Yager 2004), and IOWA operators (Yager 2003). In all these methods, the aggregation operators aggregate the attribute values and finally converge to a single aggregation point representing the aggregated value of the alternatives. Following the work in Xu et al. (2014), our work proposes a distance-based aggregation approach that minimizes the distance between the weighted attribute values of the alternatives across the attributes to reach a consensus amongst the attributes. The point of consensus indicates the maximum agreement amongst the attributes. The procedure is described in the following steps. Step 1: Let (d_1^j, d_2^j, …, d_m^j) be the priority vector of the alternatives (A_1, A_2, …, A_m) for the jth attribute C_j (j = 1, 2, …, n); in matrix form, the priority vectors constitute the matrix D = (d_i^j)_{m×n}. Step 2: Assume λ_j is the weight of the attribute C_j (j = 1, 2, …, n). The quadratic programming problem below determines the attribute weights λ_j.

Min Z(λ_1, λ_2, …, λ_n) = Σ_{k=1}^{n−1} Σ_{l=k+1}^{n} Σ_{i=1}^{m} (λ_k d_i^k − λ_l d_i^l)²
s.t. Σ_{j=1}^{n} λ_j = 1, λ_j ≥ 0 (j = 1, 2, …, n).   (6.2)

In the above, λ_k d_i^k represents the weighted evaluation of the ith alternative in the kth attribute.
Step 3: Solve the quadratic programming problem in Eq. (6.2) and obtain the solution (λ_1*, λ_2*, …, λ_n*). The solution is a global minimum, as the Hessian matrix of Z(λ_1, λ_2, …, λ_n) corresponding to the quadratic programming problem is positive definite (Xu et al. 2014). This is because the alternatives taken in our paper do not have uniform evaluations across the attributes, i.e. there exist attributes C_k and C_l (k ≠ l) satisfying d_i^k ≠ d_i^l.
Step 4: The weighted average of the alternative A_i (i = 1, 2, …, m) gives the rating of the alternative A_i as

R(A_i) = λ_1* d_i^1 + λ_2* d_i^2 + ⋯ + λ_n* d_i^n.   (6.3)

Step 5: As the values R(A_i) are numerical ratings of the alternatives A_i, the alternatives can be ranked according to their order of preference.
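Under the equality constraint alone, the quadratic program in Eq. (6.2) has a closed-form KKT solution λ ∝ Q⁻¹·1, where Q collects the coefficients of the quadratic form. The sketch below builds Q from the priority vectors and solves the system with plain Gaussian elimination; the λ_j ≥ 0 constraints are not enforced and should be checked on the result.

```python
def _solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def attribute_weights(priority):
    # priority[j][i] is d_i^j, the priority of alternative i in attribute j.
    # Q_kk = (n-1) * sum_i (d_i^k)^2 and Q_kl = -sum_i d_i^k d_i^l (k != l),
    # so that Z = lambda^T Q lambda expands to the objective of Eq. (6.2).
    n = len(priority)
    dot = lambda k, l: sum(a * b for a, b in zip(priority[k], priority[l]))
    Q = [[(n - 1) * dot(k, k) if k == l else -dot(k, l) for l in range(n)]
         for k in range(n)]
    x = _solve(Q, [1.0] * n)  # stationarity: 2*Q*lambda = mu * 1
    total = sum(x)
    return [v / total for v in x]
```

For two attributes with mirror-image priorities (0.6, 0.4) and (0.4, 0.6), symmetry gives equal weights (0.5, 0.5), which is easy to verify by hand on the objective.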

HFS_Ranking()
In this section, an algorithm HFS_Ranking() is written to explain the procedure developed. The algorithm takes a set of inputs m, n, q_ij (i = 1, 2, …, m; j = 1, 2, …, n), representing the number of alternatives, the number of attributes, and the cardinality of the HFE of the ith alternative in the jth attribute. HFS_Ranking() outputs the FPR. Subsequently, the FPR acts as input to HFS_LINGO_LPP(), which outputs the priority vectors W_j* = (d_1^j, d_2^j, …, d_m^j) for all j = 1, 2, …, n, as shown in Fig. 2. W_j* (j = 1, 2, …, n) in turn becomes the input to HFS_LINGO_QPP(), which gives the final ranking of the alternatives for MADM as output.

Steps 1 and 2 of the algorithm take the alternative assessments as HFEs in each attribute. Steps 3 and 4 calculate the nearest and farthest distances of each HFE from its non-fuzzy points.
Step 5 derives the Entropy corresponding to HFE.
Step 6 derives the certainty factor or risk resilience that is inherent in HFE.
Step 7 aggregates the elements of each HFE to obtain its RV. Steps 8-14 describe the conversion of the RV_ij values to their linguistic counterparts s_φij and the representation of s_φij as fuzzy numbers with their semantics. Steps 15-22 derive the similarity degree of s_φij with each linguistic term s_t ∈ BLTS. Steps 24-27 define the pairwise comparison of the alternatives showing the preference of one alternative over another. Steps 28-30 provide the FPR of the alternatives.
Step 31 takes the FPR as input to HFS_LINGO_LPP() and obtains the priority vector (d_1^j, d_2^j, …, d_m^j) of the alternatives corresponding to the jth attribute. The input of the priority vectors to HFS_LINGO_QPP() provides the ranking of the alternatives in MADM as output, as shown in Steps 32-34.

Numerical example
Take a car purchasing problem. Suppose a buyer considers the attributes (1) Price, (2) Maintenance Cost, and (3) Mileage, and that five models of cars are available in the market. Each car is assessed on the attributes Price, Maintenance Cost, and Mileage as HFEs, as shown in Table 1. The task is to select the best car or to rank the available cars according to the buyer's preferences.
The entropy as uncertainty in the buyer's assessments is derived for each car using Eq. (3.1), taking p = 1 (the Hamming distance). For example, take the car P_1 and its attribute value (0.6, 0.4, 0.1) in ''Price''. The entropy corresponding to P_1 in ''Price'' can be calculated as R_1 = (0.4 + 0.4 + 0.1)/(0.6 + 0.6 + 0.9) = 0.9/2.1 ≈ 0.429.
Similarly, for the other cars, we have the RV_ij values shown in the first part of the entries in Table 3. The second part indicates the linguistic counterparts of the RV_ij values.
Take car model P_1 and the attribute ''Price''. Using Eq. (4.1), we have (t/g) = 0.386. From the BLTS and its semantics shown in Fig. 1, we have g = 8. Thus, t = 3.088 ≈ 3.1. This implies the linguistic equivalent of 0.386 is s_3.1. Similarly, we obtain the other RV_ij values and their linguistic counterparts, as shown in Table 3.
Using Eqs. (4.2), (4.3), and (4.4), the semantics of the linguistic terms as fuzzy numbers are shown in Table 4. The semantics for the other linguistic terms are calculated similarly and are also shown in Table 4.
The similarity degrees of RV ij s (Eq. 4.5) with the linguistic terms in BLTS in the attributes Price, Maintenance Cost, and Mileage are, respectively, shown in Tables 5, 6, and 7.
Using Eqs. (4.6) and (4.7), we obtain the pairwise preference matrices of the cars in each attribute. The linear programming problem of Sect. 5 is solved for the attribute ''Price'', and similarly for the attributes ''Maintenance Cost'' and ''Mileage'', to obtain the ''priority vectors''. The priority vectors are shown in Table 14. Now, using Eq. (6.2), quadratic programming is applied to aggregate the ''priority vectors'' over the attributes to determine the final ranking of the cars for MADM. Taking λ_1, λ_2, and λ_3 as the weights of the attributes Price, Maintenance Cost, and Mileage, we have

Min Z(λ_1, λ_2, λ_3) = 0.836 λ_1² + 0.452 λ_2² + 0.4 λ_3² − 0.385 λ_1 λ_2 − 0.399 λ_1 λ_3 − 0.4 λ_2 λ_3
s.t. λ_1 + λ_2 + λ_3 = 1, λ_j ≥ 0.

The Hessian matrix is positive definite. This confirms that the objective function is convex, and hence the solution obtained is a global minimum. From the ratings, we obtain the preference ranking of the cars. In this ranking, car model P_3 is selected as the best choice. This is reasonable since, from Table 3, it has a comparatively high linguistic value, a higher RV (confidence), and less uncertainty in the ''Mileage'' attribute, which is derived as the most important as per the buyer's preferences. Similarly, car model P_2 is the last choice, as its RV is relatively low and its degree of uncertainty is somewhat high in the attribute Mileage. From the results, the methodology is coherent, and one can observe that the product ranking obtained through our proposed procedure depends not only on the RVs or linguistic values across the attributes but also on the importance attached to the attributes.
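The concrete quadratic program above can be checked numerically. The snippet below solves it under the unit-sum constraint via the KKT stationarity condition (nonnegativity turns out to hold at the solution); the resulting weights are our own computation, shown only to illustrate that Mileage indeed receives the largest weight, as stated in the text.

```python
def solve_sect8_qp():
    # Symmetric coefficient matrix Q of the objective: diagonal entries are
    # the squared-term coefficients, off-diagonal entries half the cross terms.
    Q = [[0.836, -0.1925, -0.1995],
         [-0.1925, 0.452, -0.2],
         [-0.1995, -0.2, 0.4]]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    # Stationarity with sum(lambda) = 1 gives lambda proportional to Q^{-1}·1;
    # solve Q x = (1, 1, 1) by Cramer's rule and normalize.
    d = det3(Q)
    x = []
    for col in range(3):
        Mc = [row[:] for row in Q]
        for r in range(3):
            Mc[r][col] = 1.0
        x.append(det3(Mc) / d)
    total = sum(x)
    return [v / total for v in x]
```

In our run this yields λ ≈ (0.230, 0.368, 0.402), i.e. λ_3 (Mileage) is the largest, consistent with the discussion above.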

Results and discussion
The algorithm HFS_Ranking() is tested on synthetic data sets with different numbers of attributes and alternatives to verify its validity. The experimental results obtained are satisfactory as far as time complexity is concerned. We also compare our work with other similar works in this section.

Experiments with synthetic data sets
The algorithm HFS_Ranking(), proposed in our paper, is written and implemented in C++. To begin with, HFS_Ranking() takes the input (m, n, q_ij) (i = 1, 2, …, m; j = 1, 2, …, n), representing m alternatives, n attributes, and q_ij, the cardinality of the HFE of the ith alternative in the jth attribute. Our experiments take 3 to 15 attributes and 10 sets of alternatives of different sizes for each attribute. The number of alternatives varies from 10 to 100 per attribute. In each case, the average run time (in seconds) is shown in Fig. 4. From the figure, it is found that the average run-time curve almost flattens, tending to roughly linear, and the trajectory does not increase exponentially even with large data sets.

Table 13: Fuzzy preference relation of cars for the attribute 'Mileage'

Mileage  P_1   P_2   P_3   P_4   P_5
P_1      –     0.5   0.5   0.5   0.5
P_2      0.5   –     0.5   0.5   0.5
P_3      0.5   0.5   –     0.5   0.5
P_4      0.5   0.5   0.5   –     0.5
P_5      0.5   0.5   0.5   0.5   –

Sensitivity analysis
Sensitivity analysis is applied to the numerical example given in Sect. 8 by changing the p values in the l_p-distances (Eq. (3.2)). The results are summarized in Table 15. The resulting product rankings are identical for the various p values in the l_p metric.
In the second case, we changed some product attribute values from the initial data (Table 1); the changed input table is shown in Table 16, with the changed data in italics.
Solving the problem with the data from Table 16, we obtained the product rankings for various p values in the l_p metric; they are shown in Table 17.
From the sensitivity analysis results in Tables 15 and 17, the product rankings are identical in each case, verifying that our methodology is robust and independent of the distance measure under the buyer's varying input data.

Comparison with other works
To exhibit our method's suitability and rationality, we compare the proposed procedure with other similar works (Yang and Hussain 2019; Alcantud et al. 2016; Chen and Hong 2014; Wang et al. 2015) in Sects. 9.3.1, 9.3.2, and 9.3.3.

9.3.1 Comparison with Yang and Hussain (2019) and Alcantud et al. (2016)

Our procedure is compared with the methodologies given in Yang and Hussain (2019) and Alcantud et al. (2016). Certain shortcomings in these works are identified, and the required improvements are made in our method. Using the methodologies of Yang and Hussain (2019) and Alcantud et al. (2016), and taking the buyer's data from Table 1, the product ratings and rankings are obtained in Table 18. Graphically, the rankings and ratings are shown in Figs. 5 and 6. From Table 18, the product ranking of the proposed method is identical to the rankings obtained in Yang and Hussain (2019) and Alcantud et al. (2016). However, there are variations in the product ratings. The variations may be attributed to certain new aspects that we have undertaken in our work, listed below.
(i) In Yang and Hussain (2019), the alternative having the least distance from the ideal point, or the longest distance from the anti-ideal, is taken as the best alternative. The procedure uses Hausdorff distances because of their advantages in computational complexity and rationality. However, the disadvantage is that it ignores the cardinalities of the HFEs, which are vital for the uncertainty dimension, an essential component of distance measurement. For example, using the Hausdorff measure, the distances of two HFEs from the ideal point are d((0.6, 0.4, 0.1), 1) = 0.9 and d((0.9, 0.7, 0.6, 0.4, 0.1), 1) = 0.9. Logically, the HFE with less uncertainty should be closer to the ideal point than the HFE with the greater degree of uncertainty. By this argument, the distance relation should be d((0.6, 0.4, 0.1), 1) < d((0.9, 0.7, 0.6, 0.4, 0.1), 1), as the degree of uncertainty in the former HFE is lower than that of the latter. As both HFEs are equidistant from the ideal point, it is somewhat inconsistent to assume that two products (represented as HFEs) with different degrees of uncertainty preserve the same preference ranking in product evaluation. Our paper considers the cardinality of the HFE and removes this gap by articulating and incorporating the uncertainty using entropy.

(ii) The criteria weight calculation is somewhat subjective, because the work considers only the minimum and maximum elements of the HFE for weight calculation. Our work removes this subjectivity by deriving the attribute weights objectively.
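The arithmetic in point (i) is easy to verify. Assuming the Hausdorff distance of an HFE from the crisp ideal point 1 reduces to the largest element-wise deviation (our reading of the variant being criticized), both HFEs sit at distance 0.9 despite their different cardinalities:

```python
def hausdorff_to_ideal(hfe, ideal=1.0):
    # Hausdorff distance of an HFE from a crisp point: with a singleton
    # reference set it collapses to the maximum element-wise deviation.
    return max(abs(mu - ideal) for mu in hfe)
```

Both d((0.6, 0.4, 0.1), 1) and d((0.9, 0.7, 0.6, 0.4, 0.1), 1) evaluate to 0.9, which is exactly the indistinguishability the text objects to.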
In Alcantud et al. (2016), an innovative methodology is worked out to rank HFSs. The methodology uses different scoring procedures for HFEs to rank the alternatives. Based on the score values of the HFEs, a pairwise comparison matrix C = (c_ij) over the k alternatives is derived, where k and q respectively represent the number of alternatives (options) and the number of attributes (characteristics). The eigenvector corresponding to the maximum eigenvalue of the matrix C gives the rating and ranking of the products. The matrix entry c_ij, i ≠ j, is the number of attributes m for which t_im − t_jm > 0, where t_im represents the score of the ith alternative in the mth attribute. Thus, we have c_ij = Σ_{m=1}^{q} y_m, where y_m is a binary variable: y_m = 1 when t_im − t_jm > 0 and y_m = 0 otherwise. The shortcomings are: (1) While comparing alternatives pairwise, the counts are added across attributes even when the attributes are in different dimensions.
(2) If t_im − t_jm > 0, it is counted as an added attribute even if the difference is as small as, say, 0.001 (i.e. t_im − t_jm = 0.001).
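The construction just described can be sketched in a few lines of Python, as we read it from the text above. The score data are illustrative, and the power iteration with an identity shift is our implementation choice for extracting the dominant eigenvector, not part of the original method.

```python
def count_matrix(scores):
    """Comparison matrix in the style of Alcantud et al. (2016):
    c_ij counts the attributes m for which alternative i strictly
    outscores alternative j (t_im - t_jm > 0); diagonal set to 0."""
    k = len(scores)
    return [[sum(1 for a, b in zip(scores[i], scores[j]) if a - b > 0) if i != j else 0
             for j in range(k)] for i in range(k)]

def perron_vector(C, iters=200):
    """Dominant (Perron) eigenvector by power iteration on C + I.
    The identity shift makes the dominant eigenvalue unique in magnitude."""
    k = len(C)
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(k)) + v[i] for i in range(k)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# Scores t_im of 3 alternatives over 3 attributes (illustrative data).
t = [[0.9, 0.8, 0.7],
     [0.5, 0.4, 0.3],
     [0.1, 0.2, 0.6]]
C = count_matrix(t)          # [[0, 3, 3], [0, 0, 2], [0, 1, 0]]
rating = perron_vector(C)
ranking = sorted(range(len(rating)), key=rating.__getitem__, reverse=True)
print(ranking)               # [0, 1, 2]

# Shortcoming (2): a 0.001 margin counts exactly like a 0.9 margin.
assert count_matrix([[0.501], [0.5]])[0][1] == count_matrix([[0.999], [0.1]])[0][1] == 1
```

The final assertion makes shortcoming (2) concrete: the binary count y_m is blind to the magnitude of the score difference.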
Our paper eliminates the above-mentioned issues by considering the alternatives' ratings and by deriving attribute weights to address dimensionality. In Sects. 9.3.2 and 9.3.3, we compare our method with the works given in Chen and Hong (2014) and Wang et al. (2015). We perform only theoretical comparisons; a comparison with respect to numerical results cannot be made because, in both cases, the input data are in HFLTSs, whereas in the proposed work the input data are numerical HFSs.

9.3.2 Comparison with the work given in Chen and Hong (2014)

In the work given in Chen and Hong (2014), the following deficiencies are observed: (i) While deriving the aggregation, not all members of the HFEs are accounted for. (ii) The uncertainty implicitly present in the HFEs is not explicated. (iii) The level of confidence is defined in the paper as an α-cut, subjectively. The subjectivity of the DM may introduce some bias into the alternative rankings obtained in MADM. (iv) Another deficiency in the paper is that the ranking of the alternatives is based on either the pessimistic or the optimistic nature of the decision-maker. However, DMs are rarely purely pessimistic or purely optimistic but often remain in between.
Our paper removes the above limitations and arrives at a viable solution in the following way: (i) The aggregation procedure for HFEs is designed in our paper in such a way that it not only accounts for all the members of the HFEs but also aggregates them into RVs that represent the original HFE assessment. (ii) The implicitly defined uncertainty in the HFEs is properly explicated in our work by using entropy. (iii) Our work replaces the subjectivity by objectively deriving the level of confidence as a certainty factor before arriving at the final ranking of the alternatives in MADM. (iv) Our paper accounts for the DM's behavioural attitude by deriving the weights of the attributes, and arrives at all types of solutions, inclusive of the optimistic and the pessimistic ones.

9.3.3 Comparison with Wang et al. (2015)

In the work given in Wang et al. (2015), an outranking approach for solving MCDM problems based on HFLTSs is presented. This procedure uses a directional Hausdorff distance, D_hdh, between two HFLTSs to determine the dominance of one alternative over the other.
(i) The procedure derives the directional Hausdorff distance D_hdh(H_s^1, H_s^2) between two HFLTSs H_s^1 and H_s^2. In this distance measure, to obtain the minimum value of D_hdh(H_s^1, H_s^2), we need the maximum value of f(s_j) when h_{s+}^1 ≠ h_{s+}^2, and the minimum value of f(s_i) when the condition h_{s+}^1 ≠ h_{s+}^2 is not satisfied. This results in the participation of one single linguistic term of the HFLTS, in either H_s^1 or H_s^2, and the non-involvement of the other terms of the HFLTS. This is inconsistent, as all the other members of the HFLTS are not accounted for. (ii) In the expression, the Hausdorff distance D_hdh(H_s^1, H_s^2) represents the preference level of one alternative over another. The level of preference increases when the cardinality of H_s^1 (or H_s^2) is higher, and it decreases when the cardinality is lower. However, a higher cardinality leads to more uncertainty in the HFS. Thus, alternatives with a higher preference have more uncertainty, leading to an inconsistency in calculating the preferences in the decision-making system.
Our work removes these deficiencies as explained below: (i) The first shortcoming is addressed by accounting for all the terms in the HFEs and explicating the implicitly defined uncertainty in the HFEs using the concept of entropy. (ii) Our work removes the second gap by aggregating and counting all the members of the HFEs corresponding to the alternatives after deriving and integrating the inherent uncertainties in the HFEs.
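The effect of accounting for every member of an HFE can be seen with the two HFEs used earlier in Sect. 9.3.1. A plain mean is used here only as a stand-in for the paper's entropy-based representative value (RV); the point is that a single-term measure conflates the two assessments while any full-membership aggregation separates them.

```python
def rv_all_terms(hfe):
    """Representative value that accounts for every member of the HFE
    (a plain mean here, as a stand-in for the paper's entropy-based RV)."""
    return sum(hfe) / len(hfe)

H1 = (0.6, 0.4, 0.1)
H2 = (0.9, 0.7, 0.6, 0.4, 0.1)

# A single-term measure (largest deviation from the ideal point 1)
# conflates the two HFEs ...
single_term = (max(1 - h for h in H1), max(1 - h for h in H2))
print(single_term)                          # (0.9, 0.9) -- indistinguishable
# ... whereas aggregating over all members separates them clearly.
print(rv_all_terms(H1), rv_all_terms(H2))
```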

Conclusion
The work in this paper provides a novel procedure for solving MADM problems under a hesitant fuzzy environment. The contribution of the paper is mainly to evaluate alternatives assessed as HFEs while accounting for the prevalent uncertainties. This is done by deriving the entropy of the attribute values corresponding to the alternatives. The linguistic counterparts of the aggregated alternative values (RVs) in each attribute are used to construct an FPR matrix. The eigenvector corresponding to the maximum eigenvalue of the FPR prioritizes the alternatives in each attribute as priority vectors. Further, the proposed method minimizes the weighted distance amongst the priority vectors to derive the weights of the attributes. The method then derives the ranking of the alternatives in MADM through the weighted aggregation of the attribute values. We should point out that the proposed methodology presents some advantages in comparison with other models, as in our work the final ranking of the alternatives takes into account multiple factors, such as the prevalent uncertainty due to hesitancy, the linguistic interpretation of the alternatives, and the prioritization of the alternatives as per the DM's choice. Our work uses the FPR to represent the pairwise preference amongst the alternatives. However, some other preference relation may be more relevant depending on the prevailing situation; identifying the most suitable pairwise preference relation is a scope for further research. The proposed work uses a distance-aggregation approach to aggregate the priority vectors. Other aggregation operators, based on the hesitant mind-set of the decision-maker, may possibly be more suitable for the aggregation.
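The weight-derivation and final-aggregation steps summarized above can be sketched as follows. We assume, for illustration only, the common quadratic program min Σ_j w_j² d_j² subject to Σ_j w_j = 1, where d_j is the distance of attribute j's priority vector from the mean priority vector; its Lagrangian solution is w_j ∝ 1/d_j². The paper's exact program and distance may differ, and the priority data below are hypothetical.

```python
def attribute_weights(priority, eps=1e-12):
    """Weights from priority vectors via the quadratic program
    min sum_j w_j^2 * d_j^2  s.t.  sum_j w_j = 1  (an assumed form).
    d_j^2 is the squared distance of attribute j's priority vector from
    the mean priority vector; eps guards against a zero distance."""
    k = len(priority[0])              # number of alternatives
    q = len(priority)                 # number of attributes
    mean = [sum(p[i] for p in priority) / q for i in range(k)]
    d2 = [sum((p[i] - mean[i]) ** 2 for i in range(k)) + eps for p in priority]
    inv = [1.0 / d for d in d2]       # Lagrangian solution: w_j prop. 1/d_j^2
    s = sum(inv)
    return [x / s for x in inv]

def rank_alternatives(priority):
    """Weighted aggregation of attribute-wise priorities -> overall ranking."""
    w = attribute_weights(priority)
    k = len(priority[0])
    score = [sum(w[j] * priority[j][i] for j in range(len(priority))) for i in range(k)]
    return sorted(range(k), key=score.__getitem__, reverse=True), score

# Hypothetical priority vectors of 3 alternatives under 3 attributes.
priority = [[0.5, 0.3, 0.2],
            [0.6, 0.3, 0.1],
            [0.2, 0.3, 0.5]]
ranking, score = rank_alternatives(priority)
print(ranking)
```

Under this formulation, attributes whose priority vectors sit close to the consensus (mean) vector receive larger weights, which is one simple reading of "minimizing the weighted distance amongst the priority vectors".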