4.1. Demographics of the respondents
The sampled respondents reflect the typical characteristics of the general Taiwanese population [44], whose demographic structure shows a gender balance, a broad age range, and a high education level. The 1,200 respondents comprised 49.1% females and 50.9% males, most of whom (72.4%) were aged between 20 and 49 years, representing the young adult population. About half of the respondents (49.9%) held a bachelor's degree. Only a few respondents (1.25%) reported elementary school as their highest education level, which may reflect the exclusion of the 188 individuals who had never heard of AI (see Section 3.1).
Regarding the use of and attitudes toward AI products, 44.3% of the respondents reported using AI products in daily life. Regardless of whether they had used AI products, 58.4% and 14.3% of the respondents reported progressive and very progressive attitudes toward AI technology, respectively, while 25.1% and 2.3% reported conservative and very conservative attitudes, respectively.
4.2. Respondent perceptions about ethical principles
The perceptions of the respondents were analyzed regarding four ethical principles: transparency, fairness, privacy, and nonmaleficence. On the 4-point Likert scale, the respondents rated nonmaleficence as the principle that should be considered the most (M = 3.16, SD = 0.92), followed by transparency (M = 2.83, SD = 0.86), privacy (M = 2.55, SD = 0.85), and fairness (M = 2.51, SD = 0.82).
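These descriptive statistics can be reproduced directly from the item responses. The following is a minimal sketch, assuming the responses are stored in a CSV file with one 1–4 Likert column per principle (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical input: one row per respondent, one 1-4 Likert column per
# ethical principle (file and column names are assumptions).
df = pd.read_csv("survey.csv")
principles = ["transparency", "fairness", "privacy", "nonmaleficence"]

# Mean and standard deviation per principle, ordered from most to least endorsed.
summary = df[principles].agg(["mean", "std"]).T.sort_values("mean", ascending=False)
print(summary.round(2))
```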
4.3. Public opinions about accountability to different stakeholders
Echoing the principle of most concern to the public (nonmaleficence), the analyses then focused on identifying the stakeholders that should be held responsible for potential harms caused by AI. The respondents attributed accountability for accidents to different stakeholders depending on the scenario. Among the five scenarios, self-driving cars and AI-assisted medical decisions were considered the most relevant to public safety; Table 1 lists how respondents with different attitudes toward AI technology judged the accountability of stakeholders in these two scenarios. In the self-driving-car scenario, accountability was attributed mainly to programmers (i.e., developers), whereas in the adverse-medical-events scenario, it was attributed mainly to hospitals (i.e., the industry).
The different trends in accountability may reflect where the initial responsibility for a service lies. For example, the initial responsibility for driving a car belongs to the driver (i.e., the AI user); when the user switches to self-driving mode, they transfer that responsibility from themselves to the car. In contrast, in the medical-decision scenario, the initial responsibility belongs to the hospital (i.e., the industry). Even though patients agree with the hospital on using an AI medical system, they still attribute responsibility to the hospital when adverse medical events occur.
Table 1. Judgments of respondents with different attitudes toward AI technology on attributing accountability to different stakeholders in two scenarios (n = 1,200)
| Scenario | Attitudes | AI developers | Industry | AI users | AI product |
| --- | --- | --- | --- | --- | --- |
| Self-driving car | Conservative | 170 | 26 | 42 | 90 |
| Self-driving car | Progressive | 430 | 111 | 135 | 196 |
| AI medical system | Conservative | 65 | 112 | 19 | 132 |
| AI medical system | Progressive | 173 | 296 | 125 | 278 |
4.4. The effect of background characteristics on accountability
We next investigated whether the respondents' background characteristics (i.e., attitudes toward AI, gender, age, and education level) influenced their opinions about attributing accountability to different stakeholders.
4.4.1 Influence of attitudes toward AI technology on the public's opinions about accountability
To explore how the respondents' attitudes toward AI technology influenced the attribution of accountability, we conducted a 4 (attitudes) × 4 (accountability) two-way ANOVA. The respondents with different attitudes toward AI technology (very progressive, progressive, conservative, and very conservative) expressed their opinions about attributing accountability to four stakeholders (AI developers, the industry, AI users, and the AI product). The main effect of accountability was significant, F(3, 3588) = 66.95, p < .01, η2 = 0.053, while the main effect of attitudes was not, F(3, 1196) < 1, nor was the interaction, F(9, 3588) < 1. These results indicate that the respondents tended to attribute accountability more to AI developers (M = 0.43, SD = 0.29) and the industry (M = 0.41, SD = 0.30) than to AI users (M = 0.22, SD = 0.24) and the AI product (M = 0.21, SD = 0.24), regardless of their attitudes toward AI technology.
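The paper does not name its analysis software; the sketch below shows one way such a mixed-design ANOVA (accountability as a within-subject factor, attitude as a between-subject factor) could be run with the pingouin library, assuming long-format data with hypothetical column names:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long format: one row per respondent x stakeholder, with
# columns 'id', 'attitude', 'stakeholder', and 'score' (all names assumed).
df = pd.read_csv("accountability_long.csv")

# Mixed-design ANOVA: 'stakeholder' (accountability) varies within subjects,
# 'attitude' varies between subjects.
aov = pg.mixed_anova(data=df, dv="score", within="stakeholder",
                     between="attitude", subject="id")
print(aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]])
```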
4.4.2 Influence of gender on public opinions regarding accountability
A 2 (gender) × 4 (accountability) ANOVA was used to investigate the influence of gender on attributing accountability to different stakeholders. The main effect of accountability, F(3, 3594) = 256.88, p < .01, η2 = 0.18, showed the same pattern as in the attitudes × accountability ANOVA above: the respondents tended to attribute accountability more to AI developers (M = 0.44, SD = 0.29) and the industry (M = 0.40, SD = 0.29) than to AI users (M = 0.20, SD = 0.24) and the AI product (M = 0.20, SD = 0.24). Meanwhile, the main effect of gender was significant, F(1, 1198) = 6.19, p = .01, η2 = 0.005, with higher scores for females (M = 0.32, SD = 0.27) than for males (M = 0.29, SD = 0.27).
The gender × accountability interaction was significant, F(3, 3594) = 4.72, p < .01, η2 = 0.004, suggesting that opinions about accountability varied by gender. Table 2 lists the descriptive statistics for attributing accountability to different stakeholders by gender. Females attributed responsibility more to AI developers (M = 0.44, SD = 0.29) than to the industry (M = 0.39, SD = 0.29, p = .01), whereas no such pattern was found for males (p = 1.00); no other pairwise comparison was significant (all ps > .05). That is, compared with males, females placed relatively more accountability on AI developers than on the industry.
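The corrected p-value of 1.00 is consistent with a Bonferroni-style adjustment, although the text does not name the correction. Below is a minimal sketch of such post-hoc stakeholder comparisons within each gender, under that assumption and reusing the hypothetical long-format data from the previous sketch:

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("accountability_long.csv")  # hypothetical long format as above

# Paired stakeholder comparisons within each gender, Bonferroni-adjusted
# (the correction method is an assumption, not stated in the text).
for gender, sub in df.groupby("gender"):
    posthoc = pg.pairwise_tests(data=sub, dv="score", within="stakeholder",
                                subject="id", padjust="bonf")
    print(gender)
    print(posthoc[["A", "B", "T", "p-corr"]].round(3))
```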
Table 2. Means and standard deviations of accountability by gender (n = 1,200)
| Gender (sample size) | AI developers | Industry | AI users | AI product |
| --- | --- | --- | --- | --- |
| Male (589) | 0.43 (0.29) | 0.41 (0.31) | 0.18 (0.23) | 0.18 (0.23) |
| Female (611) | 0.44 (0.29) | 0.39 (0.29) | 0.23 (0.25) | 0.23 (0.25) |

Note: values are means (standard deviations).
4.4.3 Influences of age and education on public opinions regarding accountability
We report the results for age and education level together because the elderly respondents often had lower education levels (e.g., elementary school only), while the middle-aged respondents often had higher education levels (e.g., a bachelor's degree or above). Figure 1 illustrates the respondents' preferences in attributing accountability to different stakeholders across age groups, and Figure 2 shows their opinions about attributing accountability according to education level. Since one respondent declined to answer the education-level question, the sample size for the education analysis was 1,199.
For age, the main effect of accountability was significant, F(3, 3582) = 110.32, p < .01, η2 = 0.085: the respondents tended to attribute accountability to AI developers (M = 0.44, SD = 0.29) the most, followed by the industry (M = 0.38, SD = 0.30), the AI product (M = 0.21, SD = 0.24), and finally AI users (M = 0.19, SD = 0.24). Notably, those aged 70 years and older seemed not to attribute responsibility to AI users, a pattern dissimilar to that of the other respondents, although the difference was not significant. The main effect of age, F(5, 1194) = 1.79, p = .11, and the age × accountability interaction, F(15, 3582) = 1.40, p = .14, were not significant.
The interaction between education and accountability was significant, F(18, 3576) = 1.67, p = .038, η2 = 0.008, indicating that opinions varied with education level. The respondents whose highest education level was elementary school clearly attributed accountability to AI developers (M = 0.55, SD = 0.21) and did not think that users should take any responsibility (M = 0.00, SD = 0.00). The respondents whose highest level was vocational college or above attributed accountability to both AI developers (M = 0.43, SD = 0.29) and the industry (M = 0.40, SD = 0.30). Notably, those holding master's or doctoral degrees attributed accountability more to the industry (M = 0.44, SD = 0.30) than to AI developers (M = 0.40, SD = 0.29).
These differences in attributed accountability across education levels may reflect how education shapes individual perspectives. Those with higher education may be more aware that the industry carries broader responsibilities; for example, the duties of the industry include, but are not limited to, quality control, product marketing, and after-sales services. On the sales side, market considerations are often the driving force behind developing an AI product and thus influence its R&D. Respondents with higher education levels therefore tended to hold the industry more responsible than other stakeholders.
4.5. Attitudes toward AI regulation implementation
Following the accountability analyses, we investigated attitudes toward implementing AI regulation, which is closely related to management responsibility. Table 3 lists the preferences of respondents holding progressive or conservative attitudes toward AI technology in three contexts of AI regulation implementation.
Table 3. Respondents with different attitudes toward AI technology and their preferences in different contexts (n = 1,200)
| Context | Attitudes | 1 (Very conservative) | 2 | 3 | 4 (Very progressive) |
| --- | --- | --- | --- | --- | --- |
| Regulating AI technology settings | Conservative | 190 | 122 | 12 | 4 |
| Regulating AI technology settings | Progressive | 370 | 371 | 114 | 17 |
| Forbidding strong AI | Conservative | 135 | 142 | 39 | 12 |
| Forbidding strong AI | Progressive | 281 | 362 | 196 | 33 |
| Controlling AI weapon development for national defense | Conservative | 180 | 120 | 25 | 3 |
| Controlling AI weapon development for national defense | Progressive | 344 | 348 | 155 | 25 |

Note: preferences were rated from 1 (very conservative) to 4 (very progressive).
The results indicate that the respondents tended to value nonmaleficence regardless of their attitudes and the context, a pattern consistent with their perceptions about ethical principles (Section 4.2). Across contexts, the respondents accordingly supported adopting strict standards when establishing regulations for AI technologies. However, in the context of developing AI weapons for national defense, the number of respondents who supported flexible standards increased, which may reflect the view that developing national-defense weapons is necessary for a country's survival, leading these respondents to accept more flexible regulation. Moreover, although film and television entertainment frequently highlight the risks of strong AI, such as war, catastrophe, or ethical collapse caused by its development, the respondents were comparatively more progressive about regulations forbidding strong AI.
4.6. Tendencies regarding accountability for developing AI regulations
While the respondents tended to want strict standards for AI technologies, they most preferred that such regulations be framed by citizens and the legislature, followed by technology experts, and lastly by industrial autonomy. Table 4 lists the respondents' tendencies regarding accountability for developing AI regulations. The development of international AI norms is currently based mostly on expert academic research or independent industry norms, along with governments worldwide adjusting existing liability principles in their laws and regulations; there is no precedent for citizen deliberation. There is therefore a gap between the governance model expected by the respondents and current practice in various countries. When developing future regulations related to AI technology, it is thus necessary to incorporate public participation or citizen deliberation.
Table 4. Tendency of respondents toward the accountability of developing AI regulations (n = 1,200)
| Attitudes (sample size) | Citizen engagement | Legislature | Technology experts | Industrial autonomy (enterprise) |
| --- | --- | --- | --- | --- |
| Conservative (328) | 146 | 139 | 31 | 12 |
| Progressive (872) | 351 | 348 | 125 | 48 |