An evidence combination rule based on a new weight assignment scheme

In evidence theory, conflicting evidence and fuzzy evidence have a significant impact on the results of evidence combination. Nevertheless, the existing weight assignment methods can hardly reflect the significant influence of fuzzy evidence on the combination results. In this paper, we address this issue by proposing a new method for assigning evidence weights together with the corresponding combination rule. The proposed weight assignment method strengthens the consideration of fuzzy evidence. We further incorporate the Wasserstein distance to compute the clarity degree of the evidence, an important reference index for weight assignment in the proposed combination rule that effectively reduces the impact of ambiguous evidence. Using experiments, we illustrate the significant impact of fuzzy evidence on the combination results, which justifies its integration in the weight assignment process. The proposed combination rule with the new weight assignment method is also examined on a set of numerical examples and the Iris dataset. Our results confirm that, compared with four existing methods, the proposed method improves decision accuracy, F1 score, and convergence speed, and achieves more reliable fusion results.


Introduction
Evidence theory (Dempster 1967; Shafer 1976), also known as Dempster-Shafer (D-S) evidence theory, is an information fusion approach formally proposed by Dempster in 1967 and then further extended by his student Shafer. D-S evidence theory extends the basic event space of traditional probability theory into a power set space of basic events, establishes the basic probability assignment (BPA) function, and defines the evidence combination rule. Evidence theory has a strong fusion ability in dealing with uncertain information and enables decision-making based on conflicting and fuzzy evidence without prior knowledge. This is similar to the human thinking process when dealing with uncertain measurements of multi-source information, and it further provides a relatively simple reasoning mechanism. Evidence theory has been used across several fields, e.g., military command, target tracking (Gruyer et al. 2016), state recognition (Huang et al. 2018), image processing (Lian et al. 2019), fault diagnosis (Wang et al. 2019b; Zhang and Deng 2020), intelligent decision making (Fei et al. 2019; Ma et al. 2019), and medical diagnosis. In practical applications, the data come from multiple information sources and are affected by uncertain environmental factors; hence, the data samples often include highly uncertain evidence. Uncertain evidence is mainly divided into two types (Fig. 1): conflicting evidence (Zadeh 1984, 1986) and fuzzy evidence (Dubois and Prade 1985). Conflicting evidence refers to evidence pointing to a discordant target, whereas fuzzy evidence refers to evidence that does not clearly point to a specific target. If the evidence involved in the fusion contains high uncertainty, the D-S evidence combination rule may reach conclusions opposite to the actual situation (Zadeh 1984).
To solve the above defects of the evidence theory combination rule, several methods have been proposed to deal with highly uncertain evidence and improve the accuracy of decision-making. One approach is to modify the classical Dempster combination rule framework: instead of the normalization operation of the original combination rule, the focus is on allocating and resolving conflicts. According to their different realizations, methods based on this approach are further divided into re-allocation of conflicts based on a rule or model, and allocation by devising new combination rules. The former category includes methods such as conflict redistribution based on a unified reliability function model (Yager 1987; Deng and Shi 2003), the conflict coefficient (Pan 2020), and allocation of the conflict between focal elements based on local conflicts (Wang et al. 2001, 2019a). Examples of the latter category include fusion rules based on set attribute relations (Xu et al. 2004) and combination rules based on an open recognition framework (Smets 1990; Xu et al. 2008). Nevertheless, the existing combination rules do not fully consider the fusion standards in all cases, and almost none of them preserves the commutativity and associativity of the Dempster combination rule; therefore, such methods have a limited scope of application. The second approach is to correct the evidence sources before combination. These methods assume that a high degree of evidence fuzziness is the main impacting factor; hence, a specific process is designed to preprocess the evidence using weights before combination according to the Dempster rule. This type of approach focuses on how the weights are determined based on the uncertainty of the evidence (Han et al. 2011). In other words, weights are assigned to individual pieces of evidence and used to preprocess them. The main factors influencing the allocation of evidence weights are detailed in Sect. 2.
There are several methods for preprocessing evidence based on weights, as follows. The weighted average correction method (Murphy 2000; Sun et al. 2020; Chen et al. 2021), as in (Sun et al. 2020), combines the Pignistic probability distance and Deng entropy to determine the correction factor and correct the evidence source. The discount coefficient method (Jousselme et al. 2001; Lefèvre and Elouedi 2013; Song et al. 2014; Hu et al. 2016; Xu and Deng 2018), as in (Song et al. 2014), determines the discount factor of evidence based on the belief function and plausibility function of the evidence. The iterative correction method, e.g., (Tian et al. 2021), utilizes the first fusion results of Dempster's combination rule and the support degree between pieces of evidence as a reference, re-measuring the correction parameters of each piece of evidence to correct the evidence and re-fuse it over multiple iterations. The methods for correcting evidence sources mainly rely on weights, and such methods do not make full use of the characteristics of the evidence itself and the correlation between different pieces of evidence (Wang et al. 2021). Compared with modifying the combination rule framework, such methods retain the excellent mathematical properties of D-S evidence theory and are hence capable of effectively solving the problems caused by conflicting and fuzzy evidence in combination. Therefore, this approach is preferred in practical applications (Wang et al. 2021).
Since the decision outcome of the source correction method is greatly influenced by the weight of the evidence, it is essential to accurately measure the uncertainty of the evidence and obtain reasonable weights. At present, the uncertainty measurement of evidence (Xiao 2021a) mainly involves conflict measurement (indicated by the conflict degree or credibility degree) for conflicting evidence, and ambiguity measurement (indicated by the ambiguity degree or clarity degree) for fuzzy evidence (Fig. 2). Previous studies tend to pay more attention to the conflict of evidence while ignoring or underestimating the impact of fuzzy evidence on decision-making results. In practice, however, fuzzy evidence usually occurs more often than conflicting evidence. Meanwhile, the impact of fuzzy evidence on the final decision is no less than that of conflicting evidence, as verified by the follow-up experiment. Therefore, attention should be paid to the influence of fuzzy evidence on decision-making and to means of enhancing the weighting of fuzzy evidence.
To address the shortcomings of the existing schemes and improve their speed and accuracy, this paper proposes a new evidence weight assignment method. The proposed method uses the Wasserstein distance to calculate the clarity degree of evidence and fuses it with the credibility degree of evidence, computed from the Jousselme distance and the Sort-Factor, to obtain the weights used for evidence modification. The innovation of this method is that the Wasserstein distance is used to measure the clarity of evidence for the first time. Its advantage is that it makes full use of the Wasserstein distance's ability to reliably measure distances between discrete distributions, ensuring a more accurate reflection of the ambiguity degree of each piece of evidence. Based on this definition, a new weight assignment formula is proposed. Experiments show that, compared with other similar methods, the proposed method has higher decision accuracy, faster convergence, and more reliable fusion results. The rest of this paper is organized as follows. Section 2 introduces the related work. Section 3 describes the terms and formulas used in the paper. The proposed method is presented in Sect. 4, followed by Sect. 5, where experimental results and comparisons with four other methods are presented and discussed. Section 6 concludes the paper and outlines future research directions.

Related work
One of the main bases of evidence weight assignment is conflict measurement. Some scholars measure evidence conflict by directly calculating the conflict degree. Wei (2011) introduced the K-L (Kullback-Leibler) divergence and proposed a new method to combine conflicting evidence. Similarly, Li (2014) used the K-L divergence instead of the traditional D-S conflict coefficient K to characterize the conflict degree of evidence in the system, and further optimized the scope of application and convergence performance of D-S evidence theory. Nevertheless, the K-L divergence does not satisfy the symmetry and triangle inequality properties of a distance function. Xiao (2020a) used a new reinforced belief divergence measure (RB) to quantify the difference between BPAs. The Hellinger distance and the belief Coulomb force have also been introduced to measure the conflict degree of evidence (Fu 2021). Some other scholars use indirect methods to measure the conflict degree of evidence: they measure the similarity between pieces of evidence through a distance function and then convert its value into credibility. Considering that different distance functions describe the credibility of evidence with different effects, Lin (2016) proposed a combination method based on the Mahalanobis distance function. However, the Mahalanobis distance requires computing a covariance matrix, which is not suitable for large-scale data processing. Ye (2017) proposed a combination method based on the Lance distance function, but it did not take the ambiguity degree of evidence into account. Zhao (2013) used the closeness degree to measure the credibility of evidence, which Lei (2021) later optimized. Jousselme (2001) also measured the credibility degree of evidence by calculating the Jousselme distance function, taking into account the number of elements in each hypothesis.
This enables effective usage of the global information in each piece of evidence. On this basis, Wang (2018, 2019a) considered the size-order characteristics of the BPA values within each evidence body and used a Sort-Factor to modify the credibility degree calculated from the Jousselme distance function, making the measure of the conflict degree more accurate and reasonable. Therefore, in this paper, we also use the Jousselme distance function and the Sort-Factor to calculate the credibility degree of evidence. Another factor that cannot be ignored in the assignment of evidence weight is the ambiguity degree (or clarity degree) of evidence. Some scholars use entropy to directly calculate the ambiguity degree of evidence (Rényi 1961; Deng 2016, 2020a; Cao et al. 2020; Ni et al. 2020; Luo and Deng 2020). Deng entropy (Deng 2016, 2020b; Song and Deng 2021) is considered representative and is used in several fields, such as fuzzy multi-criteria decision-making (Xiao 2020b). Furthermore, some scholars measure the ambiguity degree of evidence from the perspective of generalized information quality (Xiao 2021b), and others from the perspective of divergence (Xiao 2020a). For example, Xiao (2019) extended the classic Jensen-Shannon (JS) divergence to the belief function; however, the relationship between the focal elements was not considered in that method. Xiao (2020a) proposed a new belief function divergence to measure the ambiguity degree of evidence. However, in the process of ambiguity degree calculation, the existing methods often suffer from complicated calculations and insufficient use of the information in the body of evidence. Besides, in the process of weight assignment, researchers usually pay less attention to ambiguity measurement than to conflict measurement.
Therefore, current weight calculations are mainly based on the conflict degree, without sufficient consideration of the ambiguity degree. In our research, we first verify experimentally the significance of the evidence weight corresponding to the ambiguity degree. We then present a new attempt at ambiguity measurement and the corresponding combination rules. In our scheme, the clarity degree of evidence is computed and used to measure the ambiguity indirectly.

Basis of D-S evidence theory
D-S evidence theory is a powerful decision-making tool to reasonably describe unknown information and effectively deal with uncertain information. It represents the evidence collected by the decision-maker from sensors using values in the interval [0, 1] through two basic concepts, the belief function and the plausibility function, and uses the Dempster combination rule to fuse the basic probability assignment functions generated by different pieces of evidence. D-S evidence theory defines basic concepts such as the frame of discernment H and the basic probability assignment (BPA) function to describe uncertainty problems. The information combination process of D-S evidence theory is shown in Fig. 3.

Frame of discernment
The D-S evidence theory is built on a finite, nonempty set of mutually exclusive elements, called the frame of discernment and denoted by H. With n mutually exclusive elements, the frame can be expressed as:

H = {A_1, A_2, ..., A_n}

where A_i is called an event or element of the frame of discernment H. Its power set is:

2^H = {∅, {A_1}, ..., {A_n}, {A_1, A_2}, ..., H}

where ∅ is the empty set.

Basic probability assignment
The basic probability assignment (BPA) function m is a mapping from the power set 2^H to the interval [0, 1]. With A denoting any subset of the frame of discernment H (A ⊆ H), it satisfies the following conditions:

m(∅) = 0,   0 ≤ m(A) ≤ 1,   Σ_{A ⊆ H} m(A) = 1

When m(A) > 0, A is called a focal element of the evidence.
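As an illustrative sketch (not part of the original formulation), the BPA conditions above can be checked in Python, representing each focal element as a frozenset:

```python
def is_valid_bpa(m, tol=1e-9):
    """Check the BPA conditions: m(empty set) = 0, every mass lies in
    [0, 1], and the masses over all focal elements sum to 1."""
    if m.get(frozenset(), 0.0) != 0.0:
        return False  # the empty set must carry no mass
    if any(v < 0.0 or v > 1.0 for v in m.values()):
        return False
    return abs(sum(m.values()) - 1.0) < tol

# Evidence on the frame H = {A, B}: m({A}) = 0.6, m({A, B}) = 0.4
m1 = {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
print(is_valid_bpa(m1))  # True
```

The dictionary representation is an assumption made for illustration; any mapping from subsets of H to masses would serve.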

Dempster combination rule
Assuming that m_1, m_2, ..., m_n are n BPA functions under the same frame of discernment H, with focal elements A_i (i = 1, 2, ..., N), the Dempster combination rule is:

m(A) = (1 / (1 − K)) Σ_{A_1 ∩ A_2 ∩ ... ∩ A_n = A} m_1(A_1) m_2(A_2) ... m_n(A_n),   m(∅) = 0

where the conflict coefficient K is:

K = Σ_{A_1 ∩ A_2 ∩ ... ∩ A_n = ∅} m_1(A_1) m_2(A_2) ... m_n(A_n)

Credibility degree and ambiguity degree of evidence

Credibility degree of evidence
The credibility degree of evidence can be described by the evidence similarity calculated with a distance function. Wang (Wang et al. 2017) proposed three conditions that an evidence similarity function should meet: nonnegativity, disorder, and triangularity. Nonnegativity means that the similarity result is always positive; disorder means that the result is independent of the order of the arguments; triangularity means that the indirect similarity must be larger than the direct similarity so that the value domain remains compact. Based on these properties, many scholars have proposed formulas to calculate the similarity between pieces of evidence and described the credibility degree of the evidence accordingly. The credibility calculation used in this paper is the one proposed by (Wang et al. 2019a): the initial credibility degree is derived by first calculating the similarity of the evidence based on the Jousselme distance function, and is then modified using the Sort-Factor of the evidence. The Jousselme distance between two pieces of evidence m_i and m_j is

d(m_i, m_j) = sqrt( (1/2) (m_i − m_j)^T D (m_i − m_j) )

where D is the matrix with entries D(A, B) = |A ∩ B| / |A ∪ B| over the focal elements, and the evidence similarity is defined as

Sim(m_i, m_j) = 1 − d(m_i, m_j)

The initial credibility of the evidence is defined from the similarity as

Cred(m_i) = Σ_{m_j ∈ m_s, j ≠ i} Sim(m_i, m_j) / Σ_{m_k ∈ m_s} Σ_{m_j ∈ m_s, j ≠ k} Sim(m_k, m_j)

where m_s is the set of evidence involved in the combination. All the evidence in m_s must be in the same frame of discernment as m. The credibility of a piece of evidence is the overall representation of its similarity: the higher the sum of its similarities with the others, the higher its credibility. The credibility degree modified with the Sort-Factor is

Cred′(m_i) = Cred(m_i) · SortFactor(m_i) / Σ_j Cred(m_j) · SortFactor(m_j)

where SortFactor(m_i) represents the Sort-Factor corresponding to the evidence m_i.
By using the Jousselme distance and Sort-Factor to measure the credibility degree of evidence, we can measure the conflict degree of evidence more accurately, considering the correlation between evidence and the amount of information in the evidence body.
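The Jousselme distance and the similarity-based credibility can be sketched as follows (a minimal Python illustration assuming frozenset-keyed BPAs; the Sort-Factor correction of Wang et al. (2019a) is omitted here):

```python
from math import sqrt

def jousselme_distance(m1, m2):
    """Jousselme distance d = sqrt(0.5 * (v1 - v2)^T D (v1 - v2)),
    where D(A, B) = |A & B| / |A | B| weighs focal-element overlap."""
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals]
    total = 0.0
    for a, A in zip(diff, focals):
        for b, B in zip(diff, focals):
            total += a * b * len(A & B) / len(A | B)
    return sqrt(0.5 * total)

def credibility_degrees(evidences):
    """Normalized credibility from pairwise similarity Sim = 1 - d:
    evidence closer to the others receives higher credibility."""
    sup = [sum(1.0 - jousselme_distance(mi, mj)
               for j, mj in enumerate(evidences) if j != i)
           for i, mi in enumerate(evidences)]
    total = sum(sup)
    return [s / total for s in sup]
```

For two completely disjoint singleton BPAs the distance is 1; for identical BPAs it is 0, so an outlier piece of evidence ends up with the lowest credibility.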

Ambiguity degree of evidence
The evidence in the evidence body space is often described in ambiguous language, so it is difficult to make a decision directly; we call this kind of ambiguous information fuzzy evidence. Assuming the frame of discernment H = {A, B, C}, consider m_1: m_1(A) = 0.18, m_1(B) = 0.22, m_1(C) = 0.2, m_1(AB) = 0.4. Evidence like m_1, which does not explicitly state the target category, is fuzzy evidence. The process of measuring the degree of evidence ambiguity in D-S evidence theory is called Ambiguity Measure (AM) (Deng 2020a), in which information entropy is used to measure the ambiguity of evidence directly. The widely used information entropies are the classical Shannon entropy and Deng entropy, which are calculated as follows:

Shannon entropy
E_s(m_i) = − Σ_{A ∈ X} m_i(A) log_2 m_i(A)

where X represents the power set space of the frame of discernment, A represents a focal element in the frame of discernment, and m_i(A) represents the value of the focal element A in the evidence body m_i.

Deng entropy
E_d(m_i) = − Σ_{A ∈ X} m_i(A) log_2 ( m_i(A) / (2^{|A|} − 1) )

where |A| represents the number of elements in the focal element A.
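Both entropies can be computed directly from a BPA; a short Python sketch (focal elements as frozensets, an illustrative representation) using the fuzzy evidence m_1 from the example above:

```python
from math import log2

def shannon_entropy(m):
    """Shannon entropy of a BPA, treating masses as probabilities."""
    return -sum(v * log2(v) for v in m.values() if v > 0)

def deng_entropy(m):
    """Deng entropy: each mass m(A) is spread over the 2^|A| - 1
    nonempty subsets of the focal element A."""
    return -sum(v * log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

# The fuzzy evidence m_1 from the example above
m1 = {frozenset({'A'}): 0.18, frozenset({'B'}): 0.22,
      frozenset({'C'}): 0.2, frozenset({'A', 'B'}): 0.4}
```

For BPAs with only singleton focal elements the two entropies coincide (2^1 − 1 = 1); the multi-element focal element AB makes the Deng entropy of m_1 strictly larger than its Shannon entropy.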

Assignment of evidence weight
The weight of evidence is comprehensively determined by the degree of conflict and ambiguity of evidence. At present, there are two formulas to obtain the weight.
The first one is the direct multiplication of the credibility degree and the information entropy (Li and Xiao 2020; Wang et al. 2021), which can be expressed as:

w(m_i) = Cred(m_i) × En(m_i)

where En(m_i) represents the information entropy of the evidence. This fusion formula implies that the larger the information entropy, the higher the uncertainty, and thus the higher the weight, which is counterintuitive.
We have found a negative correlation between uncertainty and evidence weight through experiments, as detailed in Sect. 5.1. The second one is the exponential fusion of the credibility degree and information entropy indices (Han et al. 2011; Wang et al. 2019a), which can be expressed as:

w(m_i) = Cred(m_i) · e^{ΔCred(m_i) / En(m_i)}

where ΔCred(m_i) denotes the difference between the credibility degree of evidence m_i and the average credibility degree. This method simply takes the ambiguity degree of evidence as the modification range of the credibility degree, enhancing the weights above the average credibility and weakening those below it. The larger the ambiguity degree, the smaller the modification range; accordingly, the smaller the ambiguity degree, the larger the modification range. However, this method ignores the importance of the ambiguity measure to decision-making results.
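The counterintuitive behavior of the multiplicative fusion can be demonstrated numerically (a sketch; the credibility values and mass vectors below are invented for illustration):

```python
from math import log2

def shannon_entropy(masses):
    """Shannon entropy of a mass vector."""
    return -sum(v * log2(v) for v in masses if v > 0)

# Two pieces of evidence with equal credibility: one sharp, one fuzzy
cred = {'sharp': 0.5, 'fuzzy': 0.5}
entropy = {'sharp': shannon_entropy([0.9, 0.05, 0.05]),
           'fuzzy': shannon_entropy([0.4, 0.3, 0.3])}

# Multiplicative fusion w = Cred * En rewards the fuzzier evidence
w = {k: cred[k] * entropy[k] for k in cred}
print(w['fuzzy'] > w['sharp'])  # True: higher entropy -> higher weight
```

This is exactly the effect the text criticizes: with equal credibility, the fuzzier piece of evidence receives the larger weight.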

Wasserstein distance
The idea of the Wasserstein distance is derived from optimal transport theory (Villani 2009). A search of the DBLP (DataBase systems and Logic Programming) bibliography shows that research on the Wasserstein distance has grown rapidly since 2017. At present, it serves as an important indicator of the independence of statistical variables in information theory. The Wasserstein distance measures the distance between two variables in terms of their probability distributions (Shen et al. 2018). It is defined on a metric space (M, ρ), with ρ(x, y) denoting the distance between two samples x and y in the set M. The Wasserstein distance between two continuous distributions P_1(x) and P_2(y) is defined as:

W_p(P_1, P_2) = ( inf_{γ ∈ Γ(P_1, P_2)} E_{(x,y)∼γ} [ ρ(x, y)^p ] )^{1/p}

where Γ(P_1, P_2) is the set of all joint distributions on M × M with P_1(x) and P_2(y) as marginal distributions, and inf represents the infimum of the expression. The value of p depends on the norm involved in the calculation; in this method the first norm is used, so p = 1. The greatest advantage of the Wasserstein distance is that even when two distributions do not intersect, or their intersection is very small, it can still reflect their proximity. Calculating the distance between pieces of evidence with the Wasserstein distance can therefore more appropriately reflect the degree of discord between evidence collected in an uncertain environment.
When the frame of discernment of the system is H = {A_1, A_2, ..., A_n} and there are N pieces of evidence E_1, E_2, ..., E_N in the system with corresponding m functions, and E_i, E_j are two of them, the Wasserstein distance between E_i and E_j is:

W(E_i, E_j) = inf_{γ ∈ Γ(E_i, E_j)} Σ_c γ(x_c, y_c) ρ(x_c, y_c)

where W(E_i, E_j) denotes the difference between the probability distributions of E_i and E_j, i.e., the distance between the two pieces of evidence. Since the frame of discernment contains n elements, the index c ranges over the (2^n − 1)^2 pairs of focal elements.
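For discrete distributions on a common ordered support with unit spacing, the 1-Wasserstein distance reduces to the sum of absolute differences between the cumulative distributions. The sketch below uses this reduction (placing focal elements at integer positions is an illustrative assumption, not necessarily the paper's exact construction):

```python
def w1_discrete(p, q):
    """1-Wasserstein distance between two discrete distributions given on
    the same ordered support with unit spacing: sum |CDF_p - CDF_q|."""
    cum_p = cum_q = total = 0.0
    for a, b in zip(p, q):
        cum_p += a
        cum_q += b
        total += abs(cum_p - cum_q)
    return total

# Two non-overlapping point masses are still distinguished by W1,
# unlike the K-L divergence, which would be infinite here.
print(w1_discrete([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # 2.0
```

The example illustrates the advantage stated above: even with no overlap, the distance remains finite and grows with how far apart the masses sit.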

Proposed method
On the basis of effectively measuring the degree of evidence conflict, we study the influence of the ambiguity degree of evidence on the weight and propose a new weight assignment method to effectively reduce the effect of fuzzy evidence. The architecture of the evidence combination based on this weight assignment is shown in Fig. 4. First, we obtain the set of BPAs using the basic probability assignment generation method based on the number of intervals, following Kang (2012), and then compute the credibility degree of evidence using the method of Wang (2019a). At the same time, the Wasserstein distance is used to calculate the difference between the original evidence and the uniform distribution, which represents the clarity degree of the evidence. The evidence weights are then determined according to the credibility degree and clarity degree. To reduce the impact of conflicting and fuzzy evidence on the combination results, the modified average evidence (MAE) is obtained by using the weights to modify the BPA of each piece of evidence. Finally, the decision-making results are obtained by combining the MAE N − 1 times with the Dempster combination formula.
We found that ambiguous evidence has an impact on combination results as important as that of conflicting evidence, as shown in Experiment 1 in Sect. 5. Therefore, to improve the effectiveness of the evidence modification, our weight assignment formula makes the gain and penalty that the credibility degree and the clarity degree apply to the evidence weights follow the same trend.

Clarity degree of evidence calculation based on Wasserstein distance
To obtain the clarity degree of evidence, we first calculate the Wasserstein distance (Eq. 14) between each piece of evidence and the uniform distribution, and then normalize the results. The clarity degree of evidence m_i in our scheme is calculated as:

Clar(m_i) = W(m_i, U) / Σ_{j=1}^{N} W(m_j, U),   U(A) = 1 / |m|

where m_i represents the evidence to be calculated; U represents the uniform distribution; |m| represents the number of focal elements in any evidence body; and Clar(m_i) represents the clarity degree of the evidence m_i.
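The clarity computation can be sketched as follows (a Python illustration under the same unit-spacing assumption for the discrete Wasserstein distance; the paper's exact normalization may differ):

```python
def w1_discrete(p, q):
    """1-Wasserstein on a common ordered support with unit spacing."""
    cum_p = cum_q = total = 0.0
    for a, b in zip(p, q):
        cum_p += a
        cum_q += b
        total += abs(cum_p - cum_q)
    return total

def clarity_degrees(evidences):
    """Clarity of each BPA as its W1 distance to the uniform BPA over its
    focal elements, normalized over all pieces of evidence."""
    dists = []
    for m in evidences:
        uniform = [1.0 / len(m)] * len(m)
        dists.append(w1_discrete(list(m.values()), uniform))
    total = sum(dists)
    if total == 0.0:
        return [1.0 / len(dists)] * len(dists)
    return [d / total for d in dists]
```

A piece of evidence that coincides with the uniform distribution (maximally fuzzy) gets clarity 0, while sharply concentrated evidence gets the highest clarity.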

Weight assignment based on credibility and clarity degree of evidence
Based on the credibility degree and clarity degree of evidence, we propose a new weight assignment formula:

w(m_i) = Cred(m_i) · Clar(m_i) / Σ_{j=1}^{N} Cred(m_j) · Clar(m_j)

where N represents the number of pieces of evidence involved in the combination, and w(m_i) represents the weight of evidence m_i.
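A minimal sketch of this fusion step (multiplicative combination of the two degrees followed by normalization; the paper's exact expression may differ):

```python
def assign_weights(cred, clar):
    """Fuse credibility and clarity into evidence weights: both factors
    gain or penalize a piece of evidence in the same direction, and the
    resulting weights are normalized to sum to 1."""
    raw = [c * k for c, k in zip(cred, clar)]
    total = sum(raw)
    return [r / total for r in raw]

# Illustrative credibility and clarity values for three pieces of evidence
w = assign_weights([0.4, 0.35, 0.25], [0.5, 0.3, 0.2])
```

Because the two factors enter multiplicatively, evidence that is either conflicting (low credibility) or fuzzy (low clarity) is penalized, matching the "same trend" requirement stated above.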

Combination rules
Based on the weight of evidence, the modified average evidence (MAE) is the evidence corrected according to the weights. Assuming that m_1, m_2, ..., m_N are N pieces of evidence in the same frame of discernment and w(m_i) is the weight of evidence m_i, the normalized evidence MAE is defined as:

MAE(A) = Σ_{i=1}^{N} w(m_i) m_i(A)

The final combination result is obtained by fusing the MAE N − 1 times using the basic Dempster formula. The process is shown in Fig. 5.
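The MAE construction and the repeated Dempster fusion can be sketched end to end (Python, with frozenset-keyed BPAs as an illustrative representation):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs: intersect focal elements, discard
    mass on the empty set, and renormalize by 1 - K."""
    combined, K = {}, 0.0
    for B, mb in m1.items():
        for C, mc in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                K += mb * mc  # mass assigned to conflict
    return {A: v / (1.0 - K) for A, v in combined.items()}

def fuse_with_weights(evidences, weights):
    """Build the MAE as the weighted average of the BPAs, then combine it
    with itself N - 1 times using Dempster's rule."""
    mae = {}
    for m, w in zip(evidences, weights):
        for A, v in m.items():
            mae[A] = mae.get(A, 0.0) + w * v
    result = mae
    for _ in range(len(evidences) - 1):
        result = dempster_combine(result, mae)
    return result
```

Repeated self-combination sharpens the MAE: the category favored by the weighted average accumulates an increasingly dominant mass after each round.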

Experiment
In the experimental section, we designed three groups of experiments.
Experiment 1 was designed to prove the point made in the previous section that ''Fuzzy evidence has no less impact on the final decision than conflicting evidence and should be considered in weight assignment.'' We used an exhaustive list of all evidence combinations to investigate the relationship between the credibility degree, the clarity degree, and the decision outcome. The same fusion method was also used to investigate the impact of fuzzy evidence and conflicting evidence on the decision outcome through specific numerical examples.
To intuitively show the advantages of this method in dealing with fuzzy and conflicting evidence compared with similar methods, we designed Experiment 2 using a set of numerical examples containing both fuzzy evidence and conflicting evidence. By listing the changes in the fusion results at each step of the fusion, we show that the proposed method is superior in assigning weights to the evidence.
Experiment 3 verifies that the decision accuracy of the proposed method is better than that of other state-of-the-art methods. To demonstrate the effectiveness of the proposed method on real data, a comparison is made with the existing methods on the real Iris dataset with respect to two metrics: decision accuracy and F1 score.
The methods of three previous works were compared with ours in Experiment 2 and Experiment 3. The details of these methods are as follows. Paper 1 (Wang et al. 2019a): Wang et al. calculated the credibility of the evidence based on the Jousselme distance and obtained a Sort-Factor to correct the credibility based on the size order of the focal elements in the evidence. Subsequently, the ambiguity degree of the evidence is calculated using the Deng entropy formula (9). The corrected credibility is finally fused with the ambiguity degree using the fusion formula (11) to obtain the final weight of the evidence, and the normalized evidence is calculated using formula (18) to obtain the final decision-making result according to the combination rule demonstrated in Fig. 5.
Paper 2 (Li and Xiao 2020): unlike Wang et al., Li et al. did not use a Sort-Factor to correct the credibility, but improved the calculation of the Jousselme distance to compute the credibility and used Tsallis entropy to calculate the ambiguity degree of the evidence. They then used the combination formula (10) to fuse the corrected credibility with the ambiguity degree and obtain the final weight of the evidence. The same method as Wang et al. is used to obtain the final decision-making results. Paper 3 (Wang et al. 2021): Wang et al. chose a similarly improved credibility calculation, using the Lance distance to calculate the credibility of the evidence and Deng entropy to calculate its ambiguity degree. The subsequent steps are the same as in Paper 2.

Experiment 1
It is assumed that m_1, m_2, m_3, m_4 are four pieces of evidence in the same frame of discernment, and the contents of each evidence body are shown in Table 1. It can be seen that all four pieces of evidence point to A. We now make a new fifth piece of evidence to combine with the above four and compute the credibility degree and clarity degree of all the evidence, together with the BPAs of the focal elements after the combination. This verifies that ''Fuzzy evidence has no less impact on the final decision than conflicting evidence and should be considered in weight assignment.'' For the fifth piece of evidence, we enumerate all its possible values. Considering the basic definition of the basic probability assignment function and its computational cost, the BPA value of each focal element in m_5 follows these constraints: 1. The sum of the BPA values of the focal elements in m_5 is 1; 2. The BPA value of each focal element in m_5 is greater than or equal to 0 and less than or equal to 1; 3. The BPA value of each focal element in m_5 is a multiple of 0.02.
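The enumeration implied by these constraints can be sketched as follows (a Python illustration; the exact focal-element set used in the paper yields the 23,426 candidates reported below, while this sketch only shows the enumeration idea):

```python
def enumerate_bpas(n_focal, step=0.02):
    """All BPA vectors of length n_focal whose entries are multiples of
    `step` in [0, 1] and sum to 1; integer arithmetic over `units`
    avoids floating-point drift in the sum constraint."""
    units = round(1 / step)  # e.g. 50 units of 0.02 each

    def rec(remaining, slots):
        if slots == 1:
            yield (remaining,)
            return
        for u in range(remaining + 1):
            for tail in rec(remaining - u, slots - 1):
                yield (u,) + tail

    return [tuple(u * step for u in vec) for vec in rec(units, n_focal)]

# With 3 focal elements and step 0.02 there are C(52, 2) = 1326 vectors
print(len(enumerate_bpas(3)))  # 1326
```

Each generated vector satisfies constraints 1-3 by construction, so the candidate fifth pieces of evidence can be exhaustively tested.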
According to the above requirements, 23,426 pieces of evidence were obtained. The complete combination process is shown in Algorithm 1. In this experiment, only the credibility of the evidence is used to determine the weight. A scatterplot of the relationship between the BPA value of target A and the credibility degree and clarity degree of the fifth piece of evidence is shown in Fig. 6.
A darker dot in Fig. 6 means a larger BPA value; the horizontal coordinate represents the credibility degree of the evidence, while the vertical coordinate represents the clarity degree of the evidence obtained from the Wasserstein distance formula. It can be seen from the scatterplot that the BPA value of the dots with high credibility degree and low clarity degree is smaller than that of the dots with low credibility degree and high clarity degree. This indicates that the ambiguity measurement of evidence has an important influence on the final decision, and this influence is no less than that of the conflict measurement. Therefore, the influence of fuzzy evidence should be given more consideration in the weight distribution. When the pieces of evidence in Table 2 are used as the fifth piece of evidence, respectively, the combination results are shown in Table 3.
It can be seen from the table that the value of M(A) decreases when the combined evidence contains conflicting evidence or fuzzy evidence, i.e., both have an influence on the final decision. However, when there is fuzzy evidence among the evidence to be combined, the value of M(A) decreases more. Therefore, we can draw the same conclusion as above: fuzzy evidence has a great impact on the final decision and should be fully considered when determining the weight of evidence.

Experiment 2
In this experiment, we take an automobile system fault as an example to predict the fault type and illustrate the effectiveness and feasibility of the proposed method. The data used in the experiment add a piece of fuzzy evidence m_6 to the data used by Li (2020), giving 6 pieces of evidence and 3 fault types. Assuming the frame of discernment H = {A, B, C}, the fault diagnosis results are shown in Table 4. The correct fault type is A. We used the methods in (Wang et al. 2019a; Li and Xiao 2020; Wang et al. 2021) and our proposed method, respectively, to combine this evidence, and obtained the BPA values of each focal element as listed in Table 5.
As shown in Table 4, the diagnosis of m_5 conflicts with the other evidence, that is, it is conflicting evidence, while m_6 is fuzzy evidence on which it is difficult to base a judgment because it does not explicitly point to any fault type. The combined results may be distorted by such highly uncertain evidence. Table 5 lists the weights of evidence calculated by (Wang et al. 2019a; Li and Xiao 2020; Wang et al. 2021) and by the method in this paper. Table 6 shows the results of the different evidence combination methods after each fusion.
As shown in Table 6, when there is uncertain evidence, the classical Dempster combination rule is greatly affected and cannot make an accurate judgment, while the other methods can still identify the correct fault type. Comparing the methods, we observe that if a smaller weight is assigned to conflicting and fuzzy evidence in a fusion, the BPA value of the target category (focal element A) in the fusion result will be larger, and those of the other categories (focal elements B, C, and H) will be smaller, i.e., the fusion method is superior. From Tables 5 and 6, we can see that all the methods except Dempster's accurately identify the conflicting evidence m_5 and assign it a smaller weight. However, for the fuzzy evidence m_6, the weight assignment methods of Li (Li and Xiao 2020) and Wang (2021) cannot deal with it effectively, which results in a decrease in the BPA value of focal element A after the evidence is combined with m_6. The methods of Wang (2019a) and this paper both assign a smaller weight to m_6, but the weight assigned by our method is smaller, which reduces the influence of fuzzy evidence on the decision to a greater extent. The BPA value of focal element A after evidence fusion by our method is the highest, 0.9983, while the other three methods give 0.9972, 0.9828 and 0.9882. In addition, the BPA values of focal elements B, C and H calculated by our method are also the smallest of the four. In summary, the proposed weight allocation method is more reasonable and makes more accurate decisions in the simultaneous presence of conflicting and ambiguous evidence. Besides, the proposed method has better convergence, because the BPA value of the target focal element is higher in our method than in the other methods after each round of combination.

Experiment 3
In this experiment, we compare the decision accuracy and F1 score of the classical Dempster combination rule, the methods of (Wang et al. 2019a; Li and Xiao 2020; Wang et al. 2021), and our proposed method, using the Iris dataset commonly used in this field.
This dataset contains 150 samples, each of which comprises five values: four attribute values (sepal length, sepal width, petal length and petal width) and one class label.
The experiment selects part of the Iris dataset as known data and uses the remaining data as the test set. First, the four attribute values of each sample are transformed into four pieces of evidence using the interval number model-based BPA generation method proposed by Kang et al.: a sample is drawn at random from the test set and its four attribute values are converted into evidence via the interval number model. We then use the method proposed in this paper to determine the weight of each piece of evidence and, finally, apply the combination rule to make the decision. The data selected to build the interval number model are crucial: in general, the more data selected, the more accurate the interval number model and the higher the decision accuracy. To demonstrate the superiority of the proposed method, we built interval number models from different amounts of data. To keep the sample categories balanced, the same number of samples was randomly selected from each of the three categories of the 150-sample Iris dataset: one per class for the first setting, two per class for the second, and so on up to 49 per class for the 49th. The amounts of data used to construct the interval number model were therefore 3, 6, 9, ..., 144, 147, i.e., 49 different sizes. For each size, the data were drawn at random, and the StratifiedShuffleSplit function in Scikit-learn was used to repeat the experiment 100 times under random selection to obtain the final results. The key steps of the experiment and the analysis of the results on the selected test data are given below.
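The balanced sampling scheme described above can be sketched as follows. The paper uses Scikit-learn's StratifiedShuffleSplit; a plain NumPy equivalent of the balanced per-class selection (function and variable names are ours, not the authors' code) makes the 49 settings x 100 repetitions explicit:

```python
import numpy as np

y = np.repeat([0, 1, 2], 50)  # Iris class labels: 50 samples per class

def stratified_sample(y, per_class, rng):
    """Randomly pick `per_class` indices from each class; the rest is the test set."""
    train_idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=per_class, replace=False)
        for c in np.unique(y)
    ])
    mask = np.ones(len(y), dtype=bool)
    mask[train_idx] = False
    return train_idx, np.where(mask)[0]

# Settings 1..49: 1, 2, ..., 49 samples per class (3, 6, ..., 147 in total),
# each repeated 100 times with fresh random draws.
rng = np.random.default_rng(0)
for per_class in range(1, 50):
    for repeat in range(100):
        train_idx, test_idx = stratified_sample(y, per_class, rng)
        # ... build the interval number model on train_idx, evaluate on test_idx
```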

Firstly, generate evidence
Four attribute values in the Iris dataset were converted into four pieces of evidence using the BPA generation method proposed by Kang (2012). We randomly selected a sample whose SL, SW, PL and PW values were 5.0, 3.5, 1.3 and 0.3, respectively, and whose plant type was Setosa. The focal element values of the resulting bodies of evidence are shown in Table 7.
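Kang's interval number model is only referenced, not reproduced, in this section; the following simplified sketch (singleton focal elements only, an illustrative similarity of the form 1/(1 + alpha*gap), and hypothetical function names of our own) conveys the idea of turning one attribute value into a body of evidence:

```python
import numpy as np

def interval_model(train_X, train_y):
    """Per-class interval [min, max] of every attribute over the training data."""
    return {c: (train_X[train_y == c].min(axis=0),
                train_X[train_y == c].max(axis=0))
            for c in np.unique(train_y)}

def bpa_from_attribute(x, model, attr, alpha=5.0):
    """BPA over singleton classes for one attribute value x.
    Simplified: compound focal elements from overlapping intervals are omitted."""
    sims = {}
    for c, (lo, hi) in model.items():
        gap = max(lo[attr] - x, 0.0, x - hi[attr])  # 0 when x lies inside the interval
        sims[frozenset([c])] = 1.0 / (1.0 + alpha * gap)
    total = sum(sims.values())
    return {fe: s / total for fe, s in sims.items()}

# Toy usage: one attribute, two classes with well-separated value ranges.
train_X = np.array([[1.0], [1.2], [5.0], [5.2]])
train_y = np.array([0, 0, 1, 1])
bpa = bpa_from_attribute(1.1, interval_model(train_X, train_y), attr=0)
```

A value falling inside a class's interval receives full similarity to that class, so the resulting BPA concentrates its mass there; values between intervals yield flatter, fuzzier evidence.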

Then, determine the weight of evidence
According to the data in Table 7, the credibility degree and clarity degree of the evidence are calculated, and the weight of each piece of evidence is determined. The details are given in Algorithm 2.
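Algorithm 2 gives the exact formulas; the overall structure can be sketched as follows, under two assumptions of our own (not the authors' exact definitions): clarity is measured as the 1-D Wasserstein distance between the sorted BPA and the uniform distribution, and credibility is derived from pairwise distances between bodies of evidence. All function names are illustrative:

```python
import numpy as np

def wasserstein_1d(p, q):
    """W1 distance between two discrete distributions on the same ordered support."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

def clarity(bpa_vec):
    """Distance from the uniform distribution: fuzzy (near-uniform) evidence
    scores near 0, sharply focused evidence scores high."""
    n = len(bpa_vec)
    return wasserstein_1d(np.sort(bpa_vec)[::-1], np.full(n, 1.0 / n))

def credibility(bpa_mat):
    """Support-based credibility: evidence far from the others is less credible."""
    E = np.asarray(bpa_mat, dtype=float)
    d = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=2) / np.sqrt(2)
    support = (1.0 - d).sum(axis=1) - 1.0  # drop the self-similarity term
    return support / support.sum()

def evidence_weights(bpa_mat, clarity_fusions=1):
    """Fuse credibility with clarity `clarity_fusions` times, then normalize."""
    w = credibility(bpa_mat)
    cl = np.array([clarity(m) for m in bpa_mat])
    for _ in range(clarity_fusions):
        w = w * cl
    return w / w.sum()

# Three bodies of evidence over {A, B, C}; the third is completely fuzzy.
E = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [1/3, 1/3, 1/3]]
w = evidence_weights(E)
```

The fuzzy body of evidence receives the smallest weight, and each additional clarity fusion shrinks the weight of near-uniform evidence further, which matches the behavior reported in the experiments.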

Finally, make decisions by using combination rules
We present a flowchart of the experiment from input data to decision-making in Fig. 7. The decision results of each method are shown in Table 8. We also recorded the accuracy and F1 score of each method as the size of the training set varies, as shown in Fig. 8, which reports the average over 100 random samples of the training set. Fig. 8 shows that our proposed method achieves the highest accuracy and F1 score, with a significant improvement over the other methods. To further verify the effectiveness of the proposed clarity degree calculation and the rationality of the weight assignment formula based on it, four comparison tests were carried out: the clarity degree was fused with the credibility degree 0, 1, 2 and 3 times, respectively, to determine the weights for modifying the evidence, and the accuracy and F1 score of the decision were observed after each fusion. The four decision result curves of different colors in the figures below correspond to None, Once, Twice and Triple, respectively. As Fig. 9 shows, accuracy and F1 score improve significantly as the number of clarity fusions increases: the more times the clarity degree is fused into the weight computation, the larger its role in the weight assignment and the smaller the weight of fuzzy evidence.
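The final fusion step follows the common weighted-average style of evidence combination (Murphy-type: average the weight-discounted BPAs, then combine the average with itself n-1 times by Dempster's rule). A self-contained sketch under that assumption, with function names of our own:

```python
from itertools import product
from functools import reduce

def dempster(m1, m2):
    """Dempster's rule for BPAs given as dicts frozenset -> mass."""
    out, K = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        if b & c:
            out[b & c] = out.get(b & c, 0.0) + mb * mc
        else:
            K += mb * mc
    return {a: v / (1.0 - K) for a, v in out.items()}

def weighted_fuse(bpas, weights):
    """Weight-average the n bodies of evidence, then self-combine n - 1 times."""
    keys = set().union(*bpas)
    avg = {k: sum(w * m.get(k, 0.0) for w, m in zip(weights, bpas)) for k in keys}
    return reduce(dempster, [avg] * len(bpas))

A, B = frozenset("A"), frozenset("B")
bpas = [{A: 0.8, B: 0.2}, {A: 0.7, B: 0.3}, {A: 0.9, B: 0.1}]
fused = weighted_fuse(bpas, [1/3, 1/3, 1/3])
```

Because the averaging happens before combination, a low weight computed from the credibility and clarity degrees directly shrinks the contribution of conflicting or fuzzy evidence, while the repeated self-combination sharpens the final decision toward the target focal element.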

Conclusion
In this paper, we investigated how to reduce the impact of conflicting evidence and fuzzy evidence on the results of evidence combination. Our main contributions are as follows: (1) Through experiments, we showed that the ambiguity of evidence has a significant impact on the final decision result of evidence combination. We further analyzed the experimental results and concluded that two commonly used weight assignment methods cannot effectively reduce the influence of ambiguous evidence on the combination result.
(2) We introduced the Wasserstein distance into the ambiguity measurement to calculate the clarity degree of evidence and proposed a corresponding new evidence weight assignment method. Experiments on the data from (Li and Xiao 2020) and on the real Iris dataset showed that the proposed method converges better than similar methods and that its decision accuracy is considerably higher than that of the compared methods.
Since the Wasserstein distance as used here does not take into account the number of elements in a focal element, a small amount of ambiguous evidence cannot be measured accurately. In future work, we would like to consider how to make better use of the Wasserstein distance to measure ambiguous evidence more accurately. We also found in the experiments that fusing the clarity degree with the credibility degree multiple times achieves better results. It is therefore also worth seeking an improved fusion formula better suited to the clarity degree and credibility degree.
Author contributions WYC was involved in writing-original, visualization, data curation, resources, formal analysis, methodology, and software. WJ contributed to writing-review and supervision. HMJ was involved in software, validation, formal analysis, and data curation. WMH was involved in conceptualization and visualization.
Funding The work was supported by National Natural Science Foundation of China Grant No. 61972133.

Data availability
The data used to support the findings of this study are available from the corresponding author upon request.
Code availability The implementation of the algorithm in the manuscript will be sent upon request.

Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.