An efficient composite cloud service model using multi-criteria decision-making techniques

Recent advancements in information technology have made cloud computing one of the most prominent technologies. It is most favorable for the bundle of services that it provides to its users. Since there is a wide range of cloud service providers (CSPs) with various services, it is challenging for the user to select a CSP that can meet all of its requirements. In this paper, we propose a composite cloud service model, which is handled by a cloud agent, to identify the best cloud services/criteria from a set of CSPs by considering the objective and subjective opinions collected from the cloud users' feedback and reviews. Note that the cloud agent is an intermediary between the users and CSPs. Then the agent recommends the CSPs to assemble the identified services into a unified group of services to fulfil the user requirements. Our model calculates the integrated objective and subjective scores of alternatives for a set of criteria and determines the best alternative for each criterion. For this, the application of two multi-criteria decision-making techniques, namely the method based on the removal effect of criteria (MEREC) and the extended step-wise weight assessment ratio analysis (extended SWARA), is used to calculate the objective and subjective scores, respectively. The proposed model is compared with the analytic hierarchy process-technique for order of preference by similarity to ideal solution (AHP-TOPSIS), TOPSIS-VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), and SWARA-VIKOR to show its effectiveness.


Introduction
Over the past few years, cloud computing has proven to be a powerful technology in delivering IT services [1][2][3]. It uses the Internet to provide its users with elastic and large-scale IT resources. According to the National Institute of Standards and Technology (NIST), it is a model that facilitates the easy and rapid use of on-demand services from a shared pool of configurable computing resources with minimal management effort [4,5]. In addition, it provides a platform that can be used for deploying and developing applications. Many commercial CSPs, such as Amazon, Microsoft, and Google, provide the cloud infrastructure to users with a wide range of services.
Since many CSPs offer similar services, the users have a wide array of options for selecting the best cloud service that meets their quality of service (QoS) requirements. The challenge is to choose the best CSP that can fulfil all the requirements and objectives of the users, and it is also a well-known NP-hard problem [6]. To deal with this challenge, MCDM techniques are used to determine the best alternative by evaluating multiple criteria and alternatives to reach a final decision [7]. It is noteworthy to mention that MCDM is a tool that deals with several conflicting and non-conflicting criteria and objectives. Moreover, it helps in selecting, sorting, and prioritizing the alternatives and assists in the overall assessment of the given situation [8][9][10][11].
In this paper, we propose a model that recommends the CSPs, through a cloud agent, to build a unified group of services. These services contain almost the best criteria values as evaluated from the opinions of the cloud users. Here, the cloud agent is a system designed to make a decision and figure out what can be done to fulfil the desired objectives of the cloud users and the CSPs [12]. In our model, the role or objective of the cloud agent is to discover the best services from a given set of CSPs considering the subjective and the objective opinions of the cloud users [13]. These opinions are collected from feedback, reviews, and questionnaires. Then the cloud agent recommends the CSPs to put together the selected services from the various CSPs and form a unified group of services that can be delivered to the cloud users. It can also help the cloud user to get composite services that are a favorable option without further research. Let us consider a real-life example. If a tour and travels portal site can recommend a pre-constructed package for its tourists, which has previously been analyzed and designed using the customers' experience, feedback, and reviews, then the tourists need not do any further research to go with the recommended package.
The main contributions of our paper are as follows.
1. We develop a CCS model using MCDM techniques to provide a unified group of services. These services outperform all other services in all the criteria values.
2. The proposed model considers both the objective and subjective opinions of cloud users. The objective score of the alternatives is calculated using MEREC [14], and the subjective score of the alternatives is calculated using extended SWARA [15].
3. We integrate the objective and subjective scores, and the rank of the alternatives for each criterion is computed from the calculated score. Then, a unified group of services is constructed by taking the criteria values of the highest-scored alternatives.
The remainder of this paper is organized as follows. First, the related work is presented in Sect. 2. Then, the proposed model is presented in Sect. 3. Next, Sect. 4 describes our implementation using a case study. Finally, Sect. 5 concludes this paper.

MCDM algorithms for selecting CSP
Kumar et al. [16] have designed a cloud service selection model using AHP and TOPSIS in a fuzzy environment. AHP is used to structure the service selection problem and for the pairwise comparison of the criteria to determine the criteria weights using triangular fuzzy numbers, whereas TOPSIS is used for the final ranking of the CSPs. Jatoth et al. [18] have proposed a hybrid MCDM model for selecting cloud services among several alternatives. They have considered quantifiable/objective criteria for their evaluation and have integrated the extended grey TOPSIS method and AHP to calculate the ranks of the alternatives. In addition, they have conducted a sensitivity analysis to demonstrate the strength of their model. In order to determine the trustworthiness of CSPs, Sidhu and Singh [17] have designed a multidimensional trust assessment scheme using MCDM algorithms. Here, the trustworthiness is calculated from the degree of conformity with the services offered in the service level agreement (SLA) by the CSP. To perform a comparative analysis, they have presented three techniques, namely AHP, TOPSIS, and the preference ranking organization method for enrichment evaluation (PROMETHEE). Rai and Kumar [19] have presented a novel method for cloud service selection, which ranks the CSPs on a daily basis. They have used TOPSIS and VIKOR in the selection process. TOPSIS is used to find the distances of the solutions from the positive and negative ideal solutions and to sort them accordingly. Finally, VIKOR is used to rank the alternatives by calculating the utility and the regret measures. Akbarizade and Faghihi [20] have proposed a hybrid MCDM model for ranking CSPs using SWARA and VIKOR. They have collected some of the decision-making factors from the literature and considered the decision-makers' (DMs) opinions to get information about the criteria and alternatives. Note that a DM refers to a person or group of persons responsible for making a strategic decision.
Note that we have assumed equal weight for each DM for the simplicity of our proposed model. Subsequently, they calculate the weights of the criteria and the subcriteria using SWARA. Finally, VIKOR is used for ranking the CSPs. Saha et al. [21] have proposed a hybrid MCDM algorithm using the analytic network process (ANP) and VIKOR to make the cloud service selection by considering both beneficial and non-beneficial criteria. ANP categorizes these criteria into four subnets, namely benefits, opportunities, costs, and risks, and calculates the local ranks of the alternatives. Finally, the global ranks of the alternatives are calculated by VIKOR. They have shown the stability and robustness of the algorithm using sensitivity analysis.

Objective and subjective weighting techniques
In MCDM techniques, assigning weights is an essential part of the process. The weights reflect the relative importance or priority of the criteria and can significantly affect the final evaluated value. Several approaches [13,15] have been developed for determining the criteria weights. There are three categories of weighting techniques, namely subjective weighting, objective weighting, and hybrid weighting [13].
In subjective weighting techniques, the weight is determined using the DMs' opinions. The weight reflects the preference and the subjective view of the DMs. Generally, the DMs express their judgment based on questionnaires and linguistic terms. Some of the subjective weighting techniques are the simple multi-attribute rating technique (SMART) [30], AHP [31], SMARTS [32], the Delphi method [33], the Simos procedure and revised Simos procedure [34], SMARTER [35], ANP [54], superiority and inferiority ranking (SIR) [36], SWARA [29], factor relationship (FARE) [37], the decision-making trial and evaluation laboratory (DEMATEL) [38], Kemeny median indicator ranks accordance (KEMIRA) [39], the best-worst method (BWM) [40], integrated determination of objective criteria weights (IDOCRIW) [41], criteria impact loss (CILOS) [42] and extended SWARA [15]. In our model, we use a subjective weighting technique to determine the subjective score of the alternatives. Subjective weighting has some disadvantages. First, it may be time-consuming if there is a disagreement between the DMs, and since it relies on human judgment, the result may not be accurate. Second, it may not be efficient when the number of criteria increases and the DMs lack experience and have limited capability for analyzing the criteria. As a solution, an objective weighting method may be useful.
In objective weighting techniques, the criteria weights are calculated using a specific computational process on a given decision matrix; there is no involvement of the DMs in assigning their preferences. Some of the objective weighting techniques are the entropy method (Shannon's entropy method) [33], linear programming techniques for multidimensional analysis of preference (LINMAP) [43], the weighted least-square method [44], criteria importance through inter-criteria correlation (CRITIC) [45], the digital logic and modified digital logic methods [46], adjustable mean bars (AMB) [47], the direct weighting method, the compromise programming technique [48], correlation coefficient and standard deviation (CCSD) [49], the projection pursuit algorithm [50], principal component analysis [51], the mean square deviation method [52] and the Bayes approach [53]. In our model, we use an objective weighting technique to determine the objective score of the alternatives. The disadvantage of objective weighting techniques is that they do not consider the experience and expertise of the DMs. Therefore, many researchers have suggested using integrated or hybrid weighting techniques to overcome these disadvantages and to achieve more accurate results [13]. This paper is an attempt towards the same.
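To illustrate how a purely computational objective technique works, one of the methods listed above, Shannon's entropy method, can be sketched as follows. The decision matrix is hypothetical; this is a generic illustration, not the paper's method.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via Shannon's entropy.
    X: decision matrix with rows = alternatives, columns = criteria (positive)."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                    # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        # entropy of each criterion, normalized by ln(n); 0*log(0) treated as 0
        E = -np.nansum(P * np.log(P), axis=0) / np.log(n)
    d = 1.0 - E                              # degree of divergence
    return d / d.sum()                       # weights sum to 1

# A criterion whose values vary more across alternatives gets a larger weight:
w = entropy_weights([[100, 5.0], [200, 5.1], [400, 4.9]])
```

Here the first criterion (values 100, 200, 400) varies far more than the second (values near 5), so it receives almost all the weight, reflecting that DMs play no role in the computation.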
Radulescu and Radulescu [13] have reviewed various objective and subjective weighting techniques and proposed a hybrid group decision support method for assigning weights to the criteria of CSPs. They have combined the DEMATEL and Shannon methods for the subjective and objective weighting of the criteria, respectively. A service selection brokering model is proposed by Chauhan et al. [22] that integrates subjective and objective weighting approaches for cloud service selection. User preference and feedback are collected to calculate the subjective weight, while the objective weight is calculated from cloud service benchmark data using Shannon's entropy method. Zolfani et al. [15] have proposed a subjective weighting technique, where they have extended the MCDM algorithm SWARA [29] to improve the criteria prioritization involved in the process of service selection. In addition, they have incorporated the reliability of the evaluation of the DMs' ideas to improve the quality of the decision-making process.
Ghorabaee et al. [14] have introduced a new objective weighting method called MEREC. To validate the efficiency of the proposed method, the authors have presented a set of computational analyses and used an illustrative example to demonstrate the calculation steps. Furthermore, MEREC is compared to other MCDM algorithms to determine its stability. Wang and Lee [23] have proposed an innovative approach using TOPSIS in a fuzzy environment by integrating the subjective and objective weights, which involves the users' opinions in the decision-making process. First, a scale is created by normalizing the subjective weights assigned by the individual DMs. Then, entropy theory is used to determine the objective weight from the user ratings. Finally, they have computed the closeness coefficient to determine the ranks of the alternatives by calculating the distances of the solutions from the positive and negative ideal solutions.

Cloud service composition
Vakili and Navimipour [24] have performed a systematic and comprehensive review in the field of service composition based on cloud computing. They have provided an overview and survey of the challenges associated with the composition of cloud services. They have also reviewed some of the existing cloud composition techniques and methods. Finally, they have outlined the key areas that require future research and improvement. Lahmar and Mezni [25] have proposed an approach that addresses the security-aware issues in multi-cloud service composition. They have combined fuzzy formal concept analysis (FCA) and rough set (RS) theory. The approximation property of RS and the fuzzy relations of fuzzy FCA are utilized to ensure a high level of security of the selected services and the hosting clouds. Their approach claims to eliminate insecure services, disqualify clouds, and reduce search spaces.
Barkat et al. [26] have proposed a framework based on the composition of cloud services in the multi-cloud platform. Their framework is divided into two phases. In the first phase, the combiner chooses a suitable combination of clouds from the multi-cloud database. In the second phase, they use the optimization algorithm, called intelligent water drops (IWD), to compose the services based on the QoS criteria. Finally, they have proved that their algorithm finds a solution in a reasonable amount of time compared to a similar algorithm. Moreover, the QoS criteria generated by their algorithm are close to optimal.
Dahan et al. [27] have introduced a hybrid algorithm by combining two metaheuristic algorithms, ant colony optimization (ACO) and the genetic algorithm (GA), to compose cloud services efficiently. The GA automatically tunes ACO's parameters, and ACO's performance is adjusted based on the tuned parameters. The proposed algorithm helps the ACO algorithm to avoid stagnation problems and improves its performance. Xie et al. [28] have proposed an efficient two-phase approach to solve the reliability issue in cloud service composition. They have integrated the k-means clustering technique and chaos Gauss-based particle swarm optimization (CG-PSO) to improve the QoS and reduce the search space to find the optimal service composition. The summary of the related work is given in Table 1.

Proposed model
This section presents our proposed CCS model using two MCDM algorithms, MEREC and extended SWARA. Here, we recommend the CSPs to build a unified group of services to fulfil the users' requirements. Note that the unified group of services contains the best values as the cloud user requires. The rationale behind using MEREC is that it is a recently developed objective weighting technique that is more efficient than other objective weighting techniques (e.g., CRITIC, entropy) [14]. Similarly, the rationale behind using extended SWARA is that it is a recently developed subjective weighting technique that is quite distinct from other techniques (e.g., FARE, BWM, AHP, ANP) [15]. It uses the subjective opinion of the DMs and validates their opinion. Each criterion of the recommended composite service is previously evaluated over a set of alternatives to identify the best or highest-scored alternative. As said earlier, our model integrates objective and subjective scores, calculated using MEREC (an objective weighting technique) and extended SWARA (a subjective weighting technique), respectively, to compute the score of the alternatives with respect to the criteria. Then it finds the best alternative, that is, the one that holds the highest score value among the set of alternatives. The objective is to create a composite service that contains the criteria values of the rank-one alternatives with respect to the criteria. On the other hand, a set of DMs gives their valuable feedback after adopting certain CSP services. The feedback is basically in the form of ranks/scores for the subjective opinion and quantitative values in the case of the objective opinion. In the proposed model, a cloud agent evaluates the criteria of the alternatives based on the opinions of the DMs, as shown in Fig. 1. Finally, the cloud agent recommends the CSP to build a composite service that holds the criteria values of the highest-scored alternatives.
Figure 2 describes the schema of the proposed model. The step-by-step process of the proposed model is described in the following subsections.

Phase 1: input data
Phase 1 is divided into the following steps.
1. A set of DMs is asked to give feedback after adopting certain CSP services. Their feedback is in the form of ranks/scores, which act as the subjective opinion. They also provide the objective criteria values of the alternatives. Let us consider a set of m DMs, where m ≥ 2. The criteria include, but are not limited to, VM cost, availability, reliability, response time, security, scalability and usability [16]. We consider equal weights for the DMs for the sake of implementation. The first four criteria are scalar (quantitative) and the remaining criteria are linguistic. The objective of the first and the fourth criteria is to minimize, and the objective of the remaining criteria is to maximize. The definition of each criterion is described as follows.
• Cost refers to the usage of CPU, network and storage per time unit.
• Availability refers to the time that CSP resources are available to deliver services to the users.
• Reliability refers to serving cloud services without failure under some conditions over a period of time.
• Response time is the interval between requesting and getting a cloud service from the CSP.
• Security refers to enforcing policies to safeguard sensitive information. It is represented using linguistic values, such as very low, low, medium, high and very high, to indicate the level of security.
• Scalability refers to the increase (or decrease) in resources to handle peak (or off-peak) loads.
• Usability refers to the ease of using services by the users.

2. A set of questionnaires is given to the DMs, which includes a set of ranks R = {1, 2, … , n} (rank 1 indicates the best), where n is the number of alternatives, and a list of five scores, 0, 1, 2, 3 and 4, that define no influence, low influence, medium influence, high influence and very high influence, respectively [13].
3. Each DM, say D_k, defines an objective decision matrix ODM^k = (a^k_ji), whose elements are a^k_11, a^k_12, a^k_13, and so on. Here, a^k_ji denotes the value given by D_k to the alternative A_i according to the criterion C_j. The model given in [14] calculates the weights of the criteria by taking alternatives and criteria in rows and columns, respectively. In contrast, the proposed model calculates the scores of the alternatives by taking criteria and alternatives in rows and columns, respectively. Next, the values of all the DMs are averaged to form the overall matrix ODM = (a_ji), which is defined as follows:

a_ji = (1/m) Σ_{k=1}^{m} a^k_ji. (1)

It is noteworthy to mention that these values are objective values, like computer cost, storage cost, transfer cost, application cost, etc., as stated in [55]. Moreover, these values are considered different for different DMs in [55].
4. There are q subjective decision matrices SDM1^j = (r^j_ki) and m subjective decision matrices SDM2^k = (s^k_ij). In the matrix SDM1^j, whose elements are r^j_11, r^j_12, r^j_13, and so on, r^j_ki denotes the rank given by D_k to the alternative A_i according to the criterion C_j. In the matrix SDM2^k, whose elements are s^k_11, s^k_12, s^k_13, and so on, s^k_ij denotes the score given by D_k to the alternative A_i according to the criterion C_j in order to assign the comparative importance.
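The Phase 1 input and the averaging of the DMs' objective matrices can be sketched as follows. All numbers are hypothetical (a toy setup with three DMs, two criteria and two alternatives); only the row/column convention and the averaging step follow the model.

```python
import numpy as np

# Hypothetical toy setup: m = 3 DMs; ODM^k has criteria in rows and
# alternatives in columns, as in the proposed model.
odm_per_dm = [
    np.array([[0.30, 0.50],     # C1 values for A1, A2 given by D1 (e.g. cost)
              [99.9, 99.5]]),   # C2 values (e.g. availability, %)
    np.array([[0.32, 0.48],
              [99.8, 99.6]]),
    np.array([[0.31, 0.52],
              [99.7, 99.4]]),
]

# Element-wise average over the DMs gives the overall matrix ODM.
ODM = np.mean(odm_per_dm, axis=0)
```

Averaging first keeps the rest of the pipeline independent of the number of DMs; per-DM matrices can still be kept for the MEREC step, which the model runs per DM.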

Phase 2: assigning objective scores to the alternatives using MEREC
Phase 2 is divided into the following steps.

Normalization
1. We use the input matrix ODM^k. Suppose B denotes the benefit and NB denotes the non-benefit (cost) criteria. The normalized value n^k_ji is calculated using a simple linear normalization process as follows:

n^k_ji = (min_i a^k_ji) / a^k_ji if C_j ∈ B, and n^k_ji = a^k_ji / (max_i a^k_ji) if C_j ∈ NB, (2)

where a^k_ji is the objective value given by D_k to the alternative A_i according to the criterion C_j.
2. The overall performance of the criterion C_j for D_k is calculated from the normalized matrix. A logarithmic measure with equal alternative scores is used to acquire the overall performance of the criteria [14]. It is mathematically expressed as follows:

S^k_j = ln(1 + (1/n) Σ_i |ln(n^k_ji)|), (3)

where n is the number of alternatives and n^k_ji is the normalized value.
3. The performance of each criterion is calculated by removing one alternative at a time from the normalized decision matrix of D_k, and it is defined as follows:

S'^k_ji = ln(1 + (1/n) Σ_{i'≠i} |ln(n^k_ji')|). (4)

4. The removal effect RE^k_ij of the alternative A_i for D_k with respect to the criterion C_j is calculated from Eqs. (3) and (4) as the absolute deviation, and the removal effects are summed over the criteria:

RE^k_ij = |S'^k_ji − S^k_j|, (5)
RE^k_i = Σ_j RE^k_ij. (6)

5. The final score of the alternatives is determined. Each alternative's objective score is calculated using the removal effect. Let ow^k_ij be the score of the alternative A_i by D_k with respect to the criterion C_j. It is calculated as follows:

ow^k_ij = RE^k_ij / Σ_i RE^k_ij. (7)
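The Phase 2 steps above can be sketched compactly as follows. The matrix values are hypothetical; the code follows the MEREC steps with the roles swapped as in the proposed model (criteria in rows, alternatives in columns).

```python
import numpy as np

def merec_alt_scores(X, benefit):
    """MEREC-style objective scores of alternatives for one DM.
    X: (q, n) matrix, rows = criteria, columns = alternatives.
    benefit: length-q booleans, True for benefit criteria."""
    X = np.asarray(X, dtype=float)
    q, n = X.shape
    N = np.empty_like(X)
    for j in range(q):                        # simple linear normalization
        N[j] = X[j].min() / X[j] if benefit[j] else X[j] / X[j].max()
    L = np.abs(np.log(N))
    S = np.log(1.0 + L.mean(axis=1))          # overall performance per criterion
    # performance of each criterion when one alternative is removed
    Sp = np.log(1.0 + (L.sum(axis=1, keepdims=True) - L) / n)
    RE = np.abs(Sp - S[:, None])              # removal effect of each alternative
    return RE / RE.sum(axis=1, keepdims=True) # score per criterion, sums to 1

# Hypothetical data: C1 is a cost criterion, C2 a benefit criterion.
ow = merec_alt_scores([[2.0, 4.0, 8.0],
                       [0.9, 0.8, 0.7]],
                      benefit=[False, True])
```

The alternative whose removal changes a criterion's performance the most gets the largest objective score for that criterion.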

Phase 3: assigning subjective scores to the alternative using extended SWARA
There are two parts to the extended SWARA algorithm. First, the DMs assign ranks to the alternatives, and the opinion of the DMs is validated [15]. Second, the DMs express the relative importance of the alternatives, and the subjective score of the alternatives is calculated.

Validating the opinion of the DMs

1. The input matrices are SDM1^j, which contain the ranks given by the DMs to the alternatives.
2. The sum of ranks (SOR) of the alternative A_i for the criterion C_j is calculated as SOR^j_i = Σ_{k=1}^{m} r^j_ki.
3. The average rank value of the alternative A_i is calculated as ARV^j_i = SOR^j_i / m.
4. The average rank (AR) of each alternative is assigned based on ARV^j_i.
5. The ranking sum average (RSA) of the criterion C_j is calculated as RSA_j = (1/n) Σ_i SOR^j_i.
6. The total square ranking deviation (TRD) of the criterion C_j is calculated as TRD_j = Σ_i (SOR^j_i − RSA_j)².
7. The reliability of the data or opinion given by the DMs is expressed by calculating the coefficient of concordance (COC) for each criterion. Mathematically,

COC_j = 12 · TRD_j / (m²(n³ − n)). (8)

The above formula is the same as Eq. 7 in [15] except for the denominator part. In [15], the term (1/(n−1)) Σ_{k=1}^{m} T_k appears in the denominator, whereas it is taken as 0 in Eq. (8), as there is no reiterated (tied) rank index T_k here.
8. The significance of the concordance coefficient (χ²) for each criterion is calculated as follows:

χ²_j = m(n − 1) · COC_j. (9)

9. For testing the above hypothesis, the rank of table concordance (χ²_1) is determined for (n − 1) degrees of freedom [15].
10. If the value of χ² > χ²_1, then the DM opinion is accepted. Otherwise, it is rejected.
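The validation part can be sketched as follows. The rank matrix below is hypothetical; it is merely constructed so that its column sums match the SOR values (10, 12, 19, 17, 17) reported for criterion C1 in the case study. The comparison against the table value χ²_1 is left to the caller.

```python
import numpy as np

def concordance(R):
    """Concordance check of the DMs' ranking for one criterion (a sketch).
    R[k][i] is the rank given by DM D_k to alternative A_i."""
    R = np.asarray(R, dtype=float)
    m, n = R.shape
    SOR = R.sum(axis=0)                      # sum of ranks of each alternative
    ARV = SOR / m                            # average rank value
    RSA = SOR.mean()                         # ranking sum average
    TRD = ((SOR - RSA) ** 2).sum()           # total square ranking deviation
    COC = 12.0 * TRD / (m**2 * (n**3 - n))   # coefficient of concordance, no ties
    chi2 = m * (n - 1) * COC                 # significance of COC
    return ARV, COC, chi2                    # accept if chi2 > table value

# Hypothetical ranks for 5 DMs x 5 alternatives (column sums 10, 12, 19, 17, 17).
R = [[1, 2, 5, 4, 3],
     [1, 3, 4, 5, 2],
     [2, 1, 5, 3, 4],
     [5, 4, 2, 1, 3],
     [1, 2, 3, 4, 5]]
ARV, COC, chi2 = concordance(R)
```

Note that COC and χ² depend only on the column sums, so any rank matrix with these SOR values yields the same concordance statistics.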

Finding the subjective score of the alternative
1. Here, the input matrix is SDM2^k. We consider the average value of all the DMs' opinions for the alternatives. The DM assigns the comparative score s^k_ij of each alternative for each criterion, which is given according to the alternative's rank. The alternative with rank one is assigned 0. The DM assigns the comparative score of the alternative A_i with respect to the previous (better-ranked) alternative A_{i−1}, and this continues till the last alternative.
2. The coefficient coe^k_ij of each alternative for each criterion is calculated as follows:

coe^k_ij = s^k_ij + 1. (10)

3. The recalculated score q^k_ij is calculated as follows, with q^k_ij = 1 for the rank-one alternative:

q^k_ij = q^k_{(i−1)j} / coe^k_ij. (11)

4. The subjective score sw^k_ij of the alternative A_i concerning each criterion C_j is calculated as follows:

sw^k_ij = q^k_ij / Σ_i q^k_ij. (12)
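This part can be sketched as follows. The rank order and the comparative scores of A1, A2 and A5 are hypothetical; only the A3/A4 values (rank one with s = 0, rank two with s = 0.17) follow the worked numbers in the case study.

```python
def subjective_scores(s, order):
    """Second part of extended SWARA for one criterion (a sketch).
    order: alternative indices sorted from rank one to rank n.
    s[i]: comparative importance of alternative i w.r.t. the previous,
          better-ranked alternative (0 for the rank-one alternative)."""
    q = {}
    prev = None
    for i in order:
        # coe = s + 1; the recalculated score divides the previous q by coe
        q[i] = 1.0 if prev is None else q[prev] / (1.0 + s[i])
        prev = i
    total = sum(q.values())
    return {i: qi / total for i, qi in q.items()}   # scores sum to 1

# A3 is rank one (s = 0) and A4 is rank two with s = 0.17, as in the case
# study; the remaining scores are made up for illustration.
sw = subjective_scores(s={3: 0.0, 4: 0.17, 1: 0.20, 2: 0.25, 5: 0.30},
                       order=[3, 4, 1, 2, 5])
```

The chain of divisions means each alternative's score shrinks relative to the one ranked just above it, so the ratio of consecutive scores equals the coefficient (here sw[3]/sw[4] = 1.17).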

Phase 4: integrating the objective and the subjective score of the alternative
The final score fs_ij of the alternative A_i with respect to the criterion C_j is calculated by combining its objective score ow_ij and its subjective score sw_ij [13].
Note that the score of all the alternatives is calculated with respect to every criterion.
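As a minimal sketch of Phase 4, assuming a convex combination of the two scores; the balance parameter lam is our assumption for illustration, not a value fixed by the text.

```python
def final_score(ow, sw, lam=0.5):
    """Integrate objective (ow) and subjective (sw) scores of an alternative.
    lam weighs the objective score against the subjective one (assumed)."""
    return lam * ow + (1.0 - lam) * sw

# With equal balance, an objective score of 0.30 and a subjective score
# of 0.20 integrate to 0.25.
fs = final_score(ow=0.30, sw=0.20)
```

Since both input scores sum to 1 over the alternatives for a given criterion, any convex combination also sums to 1, so the integrated scores remain comparable across alternatives.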

Phase 5: ranking the set of alternatives for each criterion
In this phase, the set of alternatives is ranked with respect to their score for each criterion. The highest-scored alternative is ranked 1.

Phase 6: a composite service recommendation
In this phase, the proposed model recommends constructing a composite service that contains the criteria values of the highest-scored alternatives. Recall that the score of the alternatives is calculated by evaluating the criteria values. Therefore, it can be said that the alternative with the highest score has the best criteria value. In other words, the composite service holds the criteria values of the rank-one alternatives from all the respective evaluations.
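Phases 5 and 6 together can be sketched as follows: rank the alternatives per criterion by final score and assemble the composite service from the rank-one alternative's criteria values. All names and numbers below are illustrative.

```python
final_scores = {                # final_scores[criterion][alternative]
    "C1": {"A1": 0.31, "A2": 0.24, "A3": 0.45},
    "C2": {"A1": 0.20, "A2": 0.50, "A3": 0.30},
}
criteria_values = {             # measured value of each criterion per alternative
    "C1": {"A1": 0.30, "A2": 0.28, "A3": 0.25},   # e.g. VM cost
    "C2": {"A1": 99.5, "A2": 99.9, "A3": 99.7},   # e.g. availability (%)
}

composite = {}
for c, scores in final_scores.items():
    best = max(scores, key=scores.get)            # rank-one alternative for c
    composite[c] = criteria_values[c][best]       # take its criterion value
```

Here the composite service takes C1 from A3 and C2 from A2, which is exactly the situation the model targets: no single alternative is best on every criterion.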

Case study
We consider a case study to implement our proposed model, which we describe in the following steps. First, we calculate the objective score of the alternatives using MEREC. It includes the following steps.
1. Table 2 is given as input; it contains the average objective criteria values for all the alternatives given by all the DMs. The normalized matrix is calculated and shown in Table 3, in which criterion C_1 is non-beneficial and the other criteria are beneficial.
2. The overall performance S_j of each criterion is calculated from the normalized matrix, and the average value of S_j is shown in Table 4.
3. The performance of each criterion with one alternative removed (S'_ji) is calculated from the normalized values and shown in Table 5. The removal effect (RE) is then calculated and shown in Table 6.
4. The objective score of the alternatives is calculated and shown in Table 7.
Next, we calculate the subjective score of the alternatives using extended SWARA. It includes the following steps.
1. The subjective score of five alternatives is calculated for six criteria. The ranks R = {1, 2, 3, 4, 5} are given in the form of subjective criteria values for the subjective opinion. Note that each DM assigns a rank to the alternatives for each criterion. Table 8 shows the ranks assigned to the alternatives by the DMs for each criterion. The average rank value (ARV) of the alternatives is calculated; the step-by-step calculations for criteria C_1 to C_6 are shown in Tables 9, 10, 11, 12, 13 and 14, respectively. For criterion C_1, for example, the sums of ranks (SOR) of the five alternatives are 10 (i.e., 1 + 1 + 2 + 5 + 1), 12, 19, 17 and 17. For each criterion, χ² > χ²_1, so the hypothesis about the ranking of the alternatives is accepted, and the AR is assigned based on the ARV.
2. Next, the DMs give the relative importance score of each alternative for each criterion (s_ij). Here, the rank-one alternative is assigned 0. In general, the alternative A_i is assigned a relative importance score with respect to the previous alternative A_{i−1}. The average value for the alternatives for each criterion is shown in Table 15.
The detailed process is discussed in the following steps.
1. The coefficient value coe_ij is calculated. For example, the rank of the alternative A_3 for the criterion C_1 is one, as calculated using extended SWARA, and the value of s_31 is 0. Therefore, coe_31 = 1. The value s_41 of the alternative A_4 for the criterion C_1 (i.e., rank two) is 0.1700. Therefore, coe_41 = 1 + 0.1700 = 1.1700.
2. Next, the recalculated score q_ij is calculated. For example, for the alternatives A_3 and A_4 with respect to the criterion C_1, q_31 = 1 (rank one) and q_41 = q_31 / coe_41 = 1 / 1.1700 = 0.8547.
3. The subjective score for the alternative A_3 with respect to the criterion C_1 is calculated as sw_31 = 1 / 4.9607 = 0.2016, where 4.9607 is the sum of the recalculated scores.
Next, we integrate the objective and the subjective scores of the alternatives, as shown in Table 16. We calculate the objective score of the alternatives using the objective weighting technique and the subjective score using the subjective weighting technique for the respective criteria; the final score of each alternative is then computed from these two scores.

Table 15 shows the scores of the alternatives for each criterion after applying extended SWARA, and Table 16 shows the combined score and the rank of the alternatives for the respective criteria. Now, we find the alternative with the highest final score, as it is the best alternative for the particular criterion. The summary is shown in Table 17. Finally, we recommend constructing a unified group of services containing the criteria values of the best alternatives. It is noteworthy to mention that the subjective and objective scores of the alternatives are calculated based on the DM opinions to rank the alternatives. Note that these scores are calculated by considering the value of one alternative against the importance of the other alternatives.

Results and discussion
We evaluated five alternatives with respect to six criteria, and the top-ranked alternatives were computed by integrating the objective and subjective scores calculated by MEREC and extended SWARA, respectively. We used the feedback of cloud users who had previously adopted the services of the CSPs for our evaluation. The rank of the alternatives for each criterion is shown in Fig. 3. Next, we recommend constructing a composite service that includes all six criteria and holds the criteria values of the highest-scored alternatives. Figure 4 represents the new unified group of services with its six criteria, where each criterion contains the value of its highest-scored alternative among the five. Note that our proposed model recommends building a unified group of services offering the best criteria values that meet the user's requirements.

Comparison with existing models
We compare our proposed model with three existing cloud service selection models, namely AHP-TOPSIS, TOPSIS-VIKOR, and SWARA-VIKOR [16,19,20]. For this, we obtained the results of the three existing models using the same dataset; the results are shown in Tables 18, 19, 20, 21, 22 and 23 and Figs. 5, 6 and 7. AHP is used to calculate the weights, whereas TOPSIS is used to calculate the ranks in Table 18. On the other hand, TOPSIS is used to calculate the weights, whereas VIKOR is used to calculate the ranks in Table 20. Subsequently, Fig. 8 shows the comparison of the ranks of the alternatives for each criterion. The existing models are partitioned into two phases. First, they calculate the weights and/or scores using one MCDM technique, and then they rank the alternatives using another MCDM technique. Moreover, in these models, the rank of the alternatives is fully dependent on the alternative score calculated in the first phase, and there is no variation in the ranking of the alternatives across the criteria. For example, when the TOPSIS-VIKOR model is used, alternative A_1 is the best alternative for all the criteria. Here, the result is fully dependent on the scores assigned to the alternatives and the weights of the criteria. On the contrary, there is little variation when the AHP-TOPSIS model is used. Specifically, alternatives A_1 and A_5 are found to be the best alternatives (i.e., alternative A_1 is best for criterion C_1 and alternative A_5 is best for criteria C_2 to C_6). Like the TOPSIS-VIKOR model, in the case of SWARA-VIKOR, alternative A_2 is the best alternative for all the criteria. However, in the case of the proposed model, we integrate both the objective and subjective scores, and the rank does not entirely depend on any single weighting technique.
For example, there is variance in the output for each criterion (i.e., alternative A2 is best for criteria C2 and C4, alternative A3 is best for criterion C3, alternative A1 is best for criterion C1, and alternative A5 is best for criteria C5 and C6). Therefore, we conclude that our proposed model performs better than the other models.
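For reference, the TOPSIS ranking phase used by the AHP-TOPSIS baseline can be sketched as follows. This is the textbook formulation (vector normalization, weighted distances to the ideal and anti-ideal solutions, closeness coefficient); the function name and example values are illustrative and not taken from the paper's dataset.

```python
import math

def topsis_closeness(matrix, weights, benefit):
    """Closeness coefficient of each alternative (higher = better).
    matrix[i][j]: performance of alternative i on criterion j."""
    m, n = len(matrix), len(matrix[0])
    # vector normalization, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    cols = list(zip(*v))
    # ideal and anti-ideal solutions per criterion type
    ideal = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(n)]
    anti = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(n)]
    d_pos = [math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
             for i in range(m)]
    d_neg = [math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
             for i in range(m)]
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]
```

Ranking the alternatives by descending closeness coefficient reproduces the single, criterion-independent ordering described above for the two-phase baselines.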
In our model, we use both objective and subjective weighting techniques to compute a combined score, from which the rank of the alternatives is derived. The existing models use AHP, TOPSIS, and SWARA as subjective weighting techniques [29,31], and VIKOR and TOPSIS as ranking techniques [33]; however, they specify neither the type of criteria nor the weighting techniques. While using the subjective opinions of the decision-makers (DMs) for the subjective criteria, we validate their opinions using extended SWARA, whereas the other models use the DMs' opinions without validating the data. (Tables 19, 21, and 23 list the best alternative for each criterion under AHP-TOPSIS, TOPSIS-VIKOR, and SWARA-VIKOR, respectively.) Finally, the majority of existing works recommend the best CSP among a given set of CSPs by evaluating the criteria. In contrast, we evaluate the criteria values of each criterion over a set of alternatives and recommend a composite service whose criteria values are selected from the best alternatives in the set. In other words, we want to form a composite cloud service that holds the best criteria values, and for this we need to find the best alternative for each criterion. For instance, in real-world scenarios, alternative A1 may outperform alternative A2 on one criterion, while A2 outperforms A1 on another. We therefore need to form a composite alternative A3 that outperforms both on their respective criteria.
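The SWARA weighting on which the subjective phase builds can be sketched with the classic recurrence (k_j = s_j + 1, q_j = q_{j-1}/k_j, then normalize). The extended variant's validation of the DMs' opinions is not reproduced here, and the function name is ours.

```python
def swara_weights(s):
    """Classic SWARA weights for criteria pre-sorted from most to least
    important. s[j] is the DM's comparative importance of criterion j+1
    relative to criterion j, so len(s) == number_of_criteria - 1."""
    k = [1.0] + [1.0 + sj for sj in s]   # coefficient k_j = s_j + 1
    q = [1.0]
    for kj in k[1:]:
        q.append(q[-1] / kj)             # recalculated weight q_j
    total = sum(q)
    return [qj / total for qj in q]      # normalized final weights
```

Because each q_j divides the previous one by a value of at least 1, the resulting weights are non-increasing, matching the DM's stated importance order.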

Sensitivity analysis
Sensitivity analysis is carried out to determine how different values of the independent variables affect the value of a dependent variable within a given set of assumptions. In other words, how are the output values affected when the input variable values are changed? A model is sensitive if changed input values consistently alter the output values; otherwise, it is robust. Sensitivity analysis also deals with the ambiguity, uncertainty, and vagueness of several factors. Our analysis considers the three scenarios discussed below.
1. The first scenario interchanges the subjective values of the criteria. The obtained result is shown in Table 24. It is observed that the output value, which is the rank of the alternatives, remains constant for every interchange.
2. The second scenario interchanges the objective values of the criteria. We write C1-C2 to indicate that the values of objective criterion C1 are interchanged with those of objective criterion C2. The obtained result is shown in Table 25. The output value is not affected by any interchange.
3. The third scenario interchanges the objective scores of the alternatives. Here, ow1-ow2 denotes that the objective score values of alternative A1 are interchanged with those of alternative A2; ow2-ow3 and ow4-ow5 are defined similarly. The obtained result is shown in Table 26. The rank of the alternatives is consistent under every interchange.
From the above three sensitivity analyses, it is seen that the rank of the alternatives for each criterion is not affected by interchanging the input variables. Therefore, our proposed algorithm is robust rather than sensitive.
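The interchange procedure behind these scenarios can be expressed as a small, generic robustness check: swap two input rows and verify that the induced ranking is unchanged. The helper names and the rank-by-sum example are illustrative only, not the paper's scoring function.

```python
def swap_rows(matrix, i, j):
    """Return a copy of the decision matrix with the rows (scores) of
    alternatives i and j interchanged."""
    out = [row[:] for row in matrix]
    out[i], out[j] = out[j], out[i]
    return out

def ranking_is_stable(rank_fn, matrix, swaps):
    """True if rank_fn produces the same ranking for every interchanged
    variant of the matrix. rank_fn maps a matrix to a list of alternative
    indices, best first."""
    base = rank_fn(matrix)
    return all(rank_fn(swap_rows(matrix, i, j)) == base for i, j in swaps)
```

A model whose ranking survives every listed interchange is declared robust under this check; any deviation marks it as sensitive.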
In the case of the other models, we perform sensitivity analysis by creating a situation in which the alternative scores are interchanged. Here, w1-w2 denotes that the score of alternative A1 is interchanged with the score of alternative A2. The results are discussed as follows.
1. The result for AHP-TOPSIS is shown in Table 27. Interchanging the alternative scores causes no alteration or deviation of the alternative ranks; therefore, AHP-TOPSIS is robust.
2. The result for TOPSIS-VIKOR is shown in Table 28. The alternative ranks deviate when the alternative scores are interchanged; therefore, TOPSIS-VIKOR is not robust.
3. The result for SWARA-VIKOR is shown in Table 29. The alternative ranks deviate when the alternative scores are interchanged; therefore, SWARA-VIKOR is not robust.
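For completeness, the VIKOR ranking phase used by the TOPSIS-VIKOR and SWARA-VIKOR baselines can be sketched as follows. This is the standard S/R/Q formulation with compromise weight v; it assumes every criterion takes at least two distinct values (otherwise the normalizations divide by zero), and the function name and example data are illustrative.

```python
def vikor_q(matrix, weights, benefit, v=0.5):
    """VIKOR compromise index Q for each alternative (lower = better).
    v trades off group utility S against individual regret R."""
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    f_best = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(n)]
    f_worst = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(n)]
    S, R = [], []
    for i in range(m):
        # weighted normalized distance from the best value per criterion
        terms = [weights[j] * (f_best[j] - matrix[i][j]) / (f_best[j] - f_worst[j])
                 for j in range(n)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_min, s_max = min(S), max(S)
    r_min, r_max = min(R), max(R)
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1 - v) * (R[i] - r_min) / (r_max - r_min) for i in range(m)]
```

Because Q blends the already-aggregated S and R values, interchanging alternative scores propagates directly into the final ordering, which is consistent with the rank deviations observed for the VIKOR-based baselines.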

Conclusion
In this paper, we have considered the cloud service selection problem, in which cloud users must evaluate a number of services provided by CSPs to select a CSP that fulfils their requirements. As CSPs offer services from a heterogeneous environment, choosing the best CSP is a complex task for users. Moreover, no single CSP fulfils all of the users' demands. Therefore, cloud services need to be combined to form an optimal composition that best satisfies users' needs. The proposed model is designed to construct a composite service that delivers the best criteria values to the users. The criteria are evaluated over a set of alternatives, considering the cloud users' objective and subjective responses. The objective and subjective scores of the alternatives for each criterion are then calculated. Finally, the integrated score of the alternatives for each criterion is computed, and the criteria values of the highest-scored alternatives are selected for building the composite service. Our model would help a cloud agent discover services for cloud users and recommend a CSP to build the composite service with respect to user satisfaction. In future work, a real dataset can be used to implement the proposed model and compare it with existing models; unlike in the present evaluation, the criteria weights in such a dataset are unequal. Further, the proposed model may be evaluated in a fuzzy environment that can deal with incomplete, contradictory, and subjective information; alternatively, it can be validated using fuzzy logic (or fuzzy numbers) to show its feasibility.
Author Contributions MS contributed to conceptualization, data curation, methodology, and writing-original draft. SKP contributed to formal analysis, investigation, methodology, validation, and writing-original draft. SP contributed to methodology, visualization, and writing-review and editing. DT performed conceptualization, validation, and writing-review and editing.