4.1 Descriptive statistics
A descriptive statistical analysis of the antecedent and outcome variables in the study was carried out, and the basic results are provided in Table 2. It can be noted from Table 2 that the correlations among attention degree, attack type, and severity are not very strong. However, without controlling for other factors, the four antecedents are positively correlated with patch release. The paper analyzes the data further on this basis.
Table 2
Descriptive statistics and correlation analysis

| | Average Value | Standard Deviation | Attention Degree | Attack Type | Severity | Popularity | Patch Release |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Attention degree | 822.630 | 435.0967 | 1 | | | | |
| Attack type | 0.846 | 0.3611 | 0.03 | 1 | | | |
| Severity | 5.923 | 1.8925 | 0.092 | 0.248* | 1 | | |

Note: *** means p < 0.001; ** means p < 0.01; * means p < 0.05
4.2 Calibration of variables
The key difference between fuzzy sets and conventional variables lies in the way they are conceptualized and labeled. The condition and outcome data must be calibrated before the QCA is executed. Calibrating a variable as a fuzzy set requires specifying a target set; this specification not only constitutes the calibration of the set but also provides a direct connection between the theoretical discourse and the empirical analysis.
In this study, fsQCA was adopted, and the antecedents and outcome were calibrated as fuzzy-set membership scores using the direct calibration method. Set membership does not have to be binary (0/1). Rather, in fsQCA, the aim is to calibrate set membership in such a way that levels of membership represent meaningful groupings. The crossover point is the value of maximum fuzziness; it determines whether a case belongs more in or more out of the target set. Among the antecedent variables chosen in this paper, the mean is utilized as the crossover point for attention degree, severity, popularity, and attack type, while "mean − standard deviation" and "mean + standard deviation" serve as the other two calibration anchors. The reason for this choice is that the mean reflects the average level of all disclosed software vulnerabilities, while the standard deviation reflects the dispersion of disclosed software vulnerabilities on a given index. Table 3 shows the final calibration results.
Table 3
Calibration threshold of each variable

| Variables | Nonmembership anchor point | Intermediate anchor point | Complete membership anchor point |
| --- | --- | --- | --- |
| Attention degree | 387.53 | 822.63 | 1257.73 |
| Severity | 4.03 | 5.923 | 7.82 |
| Popularity | 1.000 | 3 | 5 |
| Attack type | 0 | - | 1 |
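As an illustration of the direct calibration method described above, the following is a minimal Python sketch using Ragin's log-odds transform with the attention-degree anchors from Table 3. The function name and the ±3 log-odds scaling (mapping the full-membership and non-membership anchors to scores of roughly 0.95 and 0.05) are illustrative assumptions, not the authors' code:

```python
import math

def direct_calibrate(x, non_member, crossover, full_member):
    """Direct calibration: map a raw value to a fuzzy membership score
    in [0, 1] via a log-odds transform anchored at three thresholds."""
    if x >= crossover:
        # scale so the full-membership anchor maps to log-odds ~3 (score ~0.95)
        log_odds = 3.0 * (x - crossover) / (full_member - crossover)
    else:
        # scale so the non-membership anchor maps to log-odds ~-3 (score ~0.05)
        log_odds = 3.0 * (x - crossover) / (crossover - non_member)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Anchors for attention degree from Table 3 (mean and mean +/- one SD)
print(direct_calibrate(822.63, 387.53, 822.63, 1257.73))   # crossover -> 0.5
print(direct_calibrate(1257.73, 387.53, 822.63, 1257.73))  # full anchor -> ~0.953
```

A case exactly at the crossover receives the maximally fuzzy score of 0.5, consistent with the role of the intermediate anchor described in the text.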
4.3 Single-factor necessity analysis
Following the general steps of fsQCA, this paper first checks whether any single factor, or its negation, constitutes a necessary condition for the outcomes, i.e., patch release and patch non-release. This amounts to verifying whether the outcome set is a subset of the single factor or its negation, which is usually measured by consistency. A single factor or its negation is judged a necessary condition for the outcome set when its consistency exceeds 0.9. Table 4 shows the results of the fsQCA. It can be observed that no antecedent constitutes a necessary condition for either outcome: the necessity consistency of every single antecedent and its negation is well below the 0.9 threshold for both patch release and patch non-release. Therefore, no single antecedent constitutes a necessary condition for patch release or non-release.
Table 4
Test of adequacy and necessity of antecedents

| | Consistency (patch release) | Coverage (patch release) | Consistency (non-release) | Coverage (non-release) |
| --- | --- | --- | --- | --- |
| Attention degree | 0.439415 | 0.728098 | 0.467467 | 0.271902 |
| ~Attention degree | 0.560585 | 0.749925 | 0.532533 | 0.250075 |
| Severity | 0.462026 | 0.707507 | 0.544134 | 0.292494 |
| ~Severity | 0.537975 | 0.770739 | 0.455867 | 0.229261 |
| Popularity | 0.410509 | 0.721806 | 0.450713 | 0.278192 |
| ~Popularity | 0.589491 | 0.753528 | 0.549287 | 0.246473 |
| Attack Type | 0.613569 | 0.753623 | 0.571429 | 0.246377 |
| ~Attack Type | 0.386431 | 0.719780 | 0.428571 | 0.280220 |
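The necessity measures reported in Table 4 can be sketched in a few lines of Python. The membership vectors below are hypothetical, and the formulas are the standard fuzzy-set necessity consistency and coverage measures:

```python
def necessity_consistency(x, y):
    # Consistency of X as a necessary condition for Y: the share of
    # outcome membership (Y) that is contained within the condition (X).
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

def necessity_coverage(x, y):
    # Coverage of Y by X: the share of X's membership that overlaps Y,
    # indicating how relevant (non-trivial) the condition is.
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# hypothetical membership scores in a condition and in the outcome
x = [0.9, 0.6, 0.8, 0.3]  # e.g., attention degree
y = [0.8, 0.5, 0.7, 0.2]  # patch release
print(necessity_consistency(x, y))        # 1.0: every case's y <= x
print(round(necessity_coverage(x, y), 3))
```

A consistency above 0.9 would flag the condition as (almost) necessary; in this toy example the condition fully contains the outcome, which is exactly the subset relation the paper tests for.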
Analysis of the sufficiency of antecedent configurations
Antecedent configuration analysis seeks to reveal the sufficiency for the outcome of different configurations composed of multiple antecedent conditions. In this study, the fsQCA 3.0 software was used to process the data and construct the truth table. ① Case frequency threshold. Rihoux et al. proposed choosing the frequency threshold such that the retained cases account for at least 75% of the total number of cases. In this paper, the total number of cases is 916, and the four antecedent conditions generate 16 configurations in the truth table; the frequency threshold for an effective combination of antecedent conditions is therefore set to 50.
② Raw consistency threshold. Consistency measures "how closely a perfect subset relation [between a configuration and an outcome] is approximated"; in the simple case of crisp sets, consistency is the proportion of cases exhibiting the configuration that also exhibit the outcome. In this paper the outcome is patch release. It is good practice to establish different consistency thresholds for the necessity and sufficiency analyses and not to interpret subset relations that fail to meet these thresholds. An antecedent configuration whose raw consistency exceeds the threshold is treated as a subset of the outcome and assigned an outcome value of 1; otherwise, it is assigned 0. In this paper, the minimum raw consistency threshold is set to 0.75.
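The raw consistency of a candidate configuration can be computed as sketched below. The case data and condition names are hypothetical; the fuzzy AND is taken as the minimum across condition memberships, with negation ("~") as 1 − membership:

```python
def config_membership(case, conditions):
    # Membership in a configuration = fuzzy AND (minimum) over its
    # conditions; a "~"-prefixed condition uses 1 - membership.
    scores = []
    for cond in conditions:
        if cond.startswith("~"):
            scores.append(1.0 - case[cond[1:]])
        else:
            scores.append(case[cond])
    return min(scores)

def sufficiency_consistency(cases, conditions, outcome):
    # Share of the configuration's membership that is also in the outcome.
    c = [config_membership(case, conditions) for case in cases]
    y = [case[outcome] for case in cases]
    return sum(min(ci, yi) for ci, yi in zip(c, y)) / sum(c)

# hypothetical membership scores for two cases
cases = [
    {"attack_type": 1.0, "severity": 0.8, "popularity": 0.2, "release": 0.9},
    {"attack_type": 1.0, "severity": 0.7, "popularity": 0.4, "release": 0.6},
]
# configuration in the style of D1: attack type AND severity AND NOT popularity
print(sufficiency_consistency(cases, ["attack_type", "severity", "~popularity"], "release"))
```

A truth-table row whose raw consistency computed this way exceeds 0.75 would be coded 1 for the outcome before minimization.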
Three kinds of solutions are obtained after analyzing the truth table: the complex solution, the parsimonious solution, and the intermediate solution. The complex solution does not include any logical remainders. The intermediate solution includes only the logical remainders that are in line with theoretical direction and empirical evidence, while the parsimonious solution includes all logical remainders without evaluating their plausibility. The intermediate solution is therefore considered the first choice for reporting and interpreting QCA results. Following existing research, this paper reports the intermediate solution, with the parsimonious solution used as an auxiliary. Table 5 shows the configurations of software vendors' conditions for patch release, where "●" and "⊗" indicate the presence and absence of an antecedent condition, respectively; large symbols mark core conditions and small symbols mark auxiliary conditions. Blank cells indicate that a condition may be either present or absent.
Table 5 shows that the consistency of almost every configuration and the total consistency exceed the minimum acceptable standard of 0.75 in both models. The total coverage rates of patch release and non-release are 0.443 and 0.416, respectively, comparable to those reported in QCA research in the fields of organization and management. The results show that the fsQCA effectively identifies six antecedent configurations, which indicate whether the presence or absence of each antecedent factor has a positive or negative impact on software vendors' patch release decisions.
Table 5
Pre-condition configurations for patch release or patch non-release (D1–D3: patch release; N1–N3: non-release)

| Type | D1 | D2 | D3 | N1 | N2 | N3 |
| --- | --- | --- | --- | --- | --- | --- |
| Attack type | ● | | ● | ⊗ | ● | ● |
| Attention degree | | ● | ⊗ | ● | | ⊗ |
| Severity | ● | ● | | ⊗ | ⊗ | |
| Popularity | ⊗ | ⊗ | ● | | ⊗ | ⊗ |
| Coverage rate | 0.138 | 0.227 | 0.234 | 0.167 | 0.21 | 0.179 |
| Net coverage rate | 0.138 | 0.071 | 0.078 | 0.07 | 0.07 | 0.137 |
| Consistency | 0.791 | 0.8 | 0.784 | 0.756 | 0.821 | 0.721 |
| Total coverage | 0.443 | | | 0.416 | | |
| Total consistency | 0.788 | | | 0.77 | | |
In patch release configurations D1 (attack type + severity + ~popularity) and D2 (attention degree + severity + ~popularity), vulnerability severity is the core precondition, and the absence of popularity plays an auxiliary role. In D1, the attack type plays an auxiliary role; in D2, attention degree plays a core role, and the attack type may be either present or absent. In D3 (attack type + ~attention degree + popularity), attack type and popularity play core roles, while the absence of attention degree plays an auxiliary role.
Configuration N1 (~attack type + attention degree + ~severity), which causes software vendors not to release patches, has the absence of severity and the absence of the attack type as core antecedents, with attention degree playing a key role. In N2 (attack type + ~severity + ~popularity), the absence of severity and the absence of popularity are core antecedents, and the attack type is an auxiliary antecedent. In N3 (attack type + ~attention degree + ~popularity), the absence of popularity and the absence of attention degree are core antecedents, and the attack type is also a core antecedent.
For the patch release model, the total consistency of the configurations is 0.788, indicating that the release configurations explain 78.8% of software vendors' patch release behavior, and the total coverage is 0.443, indicating that the results cover 44.3% of membership in the outcome; the corresponding figures for non-release are 0.77 and 0.416. The consistency and coverage of all configurations must be analyzed jointly in qualitative comparative analysis. The consistencies of the six configurations lie between 0.72 and 0.82, indicating a good subset relationship between the configurations and software vendors' patch release (or non-release) and a solid explanatory capability. Based on these results, it can be concluded that fsQCA effectively identifies six antecedent configurations, which show how the presence or absence of each element in the different configurations affects software vendors' patch release behavior.
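The total (solution) consistency and coverage combine the individual configurations through fuzzy union. A small sketch with hypothetical membership scores:

```python
def solution_coverage(config_scores, y):
    # Fuzzy union (OR = max) of all configurations, then the share of
    # outcome membership accounted for by that union.
    union = [max(scores) for scores in zip(*config_scores)]
    return sum(min(u, yi) for u, yi in zip(union, y)) / sum(y)

def solution_consistency(config_scores, y):
    # Share of the union's membership that is also in the outcome.
    union = [max(scores) for scores in zip(*config_scores)]
    return sum(min(u, yi) for u, yi in zip(union, y)) / sum(union)

# hypothetical configuration memberships for three cases
d1 = [0.8, 0.2, 0.1]
d2 = [0.3, 0.7, 0.2]
y = [0.9, 0.8, 0.4]  # outcome (patch release) membership
print(round(solution_coverage([d1, d2], y), 3))
print(round(solution_consistency([d1, d2], y), 3))
```

Because configurations overlap, total coverage is less than the sum of the individual coverage rates, which is why Table 5 also reports net (unique) coverage per configuration.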
4.4 Robustness test of the configuration effect
Following established practice in prior research, this paper adjusts the consistency threshold and reprocesses the sample data: the minimum raw consistency threshold is raised from 0.75 to 0.76. The antecedent configurations obtained at the 0.76 threshold are identical to those obtained at 0.75, consistent with the conclusions above. The research conclusions are therefore robust.
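This robustness check amounts to re-deriving the truth-table outcome assignment at both thresholds and verifying that the same rows pass. A schematic sketch with hypothetical raw consistencies:

```python
def passing_rows(row_consistency, threshold):
    # Truth-table rows whose raw consistency meets the threshold are
    # assigned outcome 1 and enter the logical minimization.
    return {row for row, cons in row_consistency.items() if cons >= threshold}

# hypothetical raw consistencies for three truth-table rows
rows = {
    ("attack", "severity", "~popularity"): 0.80,
    ("attention", "severity", "~popularity"): 0.79,
    ("~attack", "~severity", "popularity"): 0.74,
}
same = passing_rows(rows, 0.75) == passing_rows(rows, 0.76)
print(same)  # True: no row's consistency falls between 0.75 and 0.76
```

If no row's raw consistency falls in the narrow band between the two thresholds, the minimized solutions are necessarily identical, which is the situation the paper reports.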
Based on the antecedent configurations of software vendors' patch release, the theoretical analysis behind them, and a comparison with the antecedent configurations of patch non-release, this paper puts forward the following three research propositions:
(1) The severity of vulnerabilities is the core prerequisite for software vendors to develop patches. The comparative analysis of vulnerability severity shows that high-severity vulnerabilities account for a relatively high proportion of the patch release sample cases (D1, D2). The coverage rates of these two configurations are 0.138 and 0.227, and their consistency rates are 0.791 and 0.8, respectively, both higher than the total consistency of 0.788 for the patch release cases. However, the proportion of sample cases with low-severity vulnerabilities for which patches are not released (N1, N2) is also high, with coverage rates of 0.167 and 0.21 and consistency rates of 0.756 and 0.821, respectively. Cremonini and Nizovtsev pointed out that attackers with complete information about software vulnerabilities can choose among different targets when carrying out attacks. Arora et al. [4] proved through empirical analysis that high-severity vulnerabilities strongly influence software vendors' patch release behavior. All of these results demonstrate theoretically that vulnerability severity is one of the core elements in software vendors' patch release.
However, the single core condition of a high risk level is not enough to force software vendors to release patches. In D1, the attack type exists as an auxiliary condition, i.e., a remote network attack enlarges the possibility of the vulnerability being exploited. In D2, attention degree exists as another core condition: higher attention implies a greater loss of social reputation if patches are not released in time. This suggests why the patch release rates of ultra-high-risk and high-risk vulnerabilities disclosed in CNVD in 2019 were only 73.75% and 75.87%: unpatched high-risk vulnerabilities either receive too little attention, or their attack types are mainly non-remote network attacks.
Proposition 1
In the collaborative disclosure of software vulnerabilities patched by software vendors, severity, an endogenous demand factor, is one of the necessary conditions for software vendors to develop patches. However, it is not a sufficient condition: the attack type and attention degree, as exogenous demand factors, jointly affect the decision of software vendors to release vulnerability patches.
(2) Analysis of configuration D3 shows that its coverage rate, 0.234, is the highest among all configurations. Comparing it with configurations N2 and N3, if the network attack antecedent is present but the popularity antecedent is absent, software vendors are very likely not to release patches, irrespective of whether attention degree and severity are present or absent; the coverage rates of N2 and N3 are 0.21 and 0.179, respectively. Hacker attacks through the network can be divided into direct attacks and derivative attacks. Huang and Behara used network theory to study the impact of targeted and random attacks on enterprise security investment, pointing out that when multiple interrelated information systems exist and security incidents can cause heavy losses, enterprises should pay attention to hacker network intrusion. On the other hand, for software vendors, the disclosure of vulnerability information positively impacts the market value of enterprises: the higher the popularity, the greater the impact. Generally speaking, the number of products affected by vulnerabilities in open source software is higher than in closed commercial software, yet both software users and research institutions prefer the former because of its openness and the convenience of remedying vulnerabilities at any time. Empirical results such as those presented by Arora et al. showed that open source suppliers release patches faster than closed source suppliers. At the same time, a larger number and variety of affected products decreases the marginal cost of developing patches for each product and increases the enthusiasm of software vendors to release patches.
Proposition 2
When the exogenous demand factor takes the form of a remote network attack, a vulnerability with a high endogenous demand factor is more likely to receive a patch, and the software's security is relatively reliable. Software vendors of vulnerable products with low popularity are less motivated to develop patches, and their security is relatively unreliable.
(3) During software vulnerability patch management, when vulnerability severity coexists with vulnerability attention as core conditions, the possibility of software vendors issuing patches increases. If attention exists as the core condition but vulnerability severity is missing, and the attack type is also missing as an auxiliary antecedent, the software vendor's willingness to develop vulnerability patches declines regardless of how much attention the vulnerability receives, i.e., configuration N1 in the table, with a coverage rate of 0.167 and a consistency of 0.756. Many disclosed vulnerabilities attract high attention but do not present a significant threat. For this type of vulnerability, software vendors are unlikely to carry out patch research and development for their products. For example, CNVD-2020-20443 is a local denial-of-service vulnerability in FlashFXP, which attackers can exploit to cause denial of service. It is of great concern to users, with an attention count of 1136 and a hazard level of 4.9. However, software vendors have not released a patch for this vulnerability, as it presents only a moderate risk and the attack type is local.
Proposition 3
In the collaborative disclosure of software vulnerability patches, high attention to the vulnerability, an exogenous demand factor, is one of the necessary conditions. However, it is not a sufficient antecedent: if a vulnerability is low in severity and mainly suffers non-remote network attacks, software vendors may give up patch development regardless of the attention the vulnerability receives.