Widespread use of the internet by learners of all ages has democratized the development and accessibility of educational materials1. The COVID-19 pandemic further solidified digital communication as a primary medium for information exchange among educators and learners2. Indeed, K-12 students are digital natives who use various online platforms such as YouTube to complete academic assignments3, with varying degrees of judgement about the reliability of the sources4. The amount of time children aged eight or younger spent on YouTube doubled between 2017 and 2020, portending an increased reliance on online media for even the youngest learners5. Social media and other online information sources have also facilitated lifelong education for continued professional and personal development6. However, the proliferation of misinformation on social media and other platforms has raised the risk of exposure to deliberately misleading educational content7,8.
The consequences of exposure to online misinformation range in severity and scale, often depending on context9. While many types of misinformation exist outside of the mainstream10, misinformation on science, technology, engineering, and math (STEM) related topics such as climate change and vaccines has had major public policy repercussions11–14. Social media and modern communication methods facilitate the rapid dissemination of misinformation, amplifying these impacts15,16. Misinformation campaigns tend to rely on undermining the consensus, highlighting uncertainty, undermining the credibility of leading figures and institutions, and disseminating pseudoscientific alternatives17.
The availability of open-source artificial intelligence (AI) algorithms has significantly lowered the barriers to altering videos and images in order to produce highly realistic, manipulated digital content (e.g., deepfakes)18–20. Generative neural networks (GNNs) are a class of deep neural network models that represent the state-of-the-art technique for democratizing the mass synthesis of manipulated digital content21,22. They have been used to fabricate images by training them to encode human features23, to manipulate images by replacing specific components of a digital image or video24, and to create videos by animating a still image with the characteristics of a source video25.
In this study, we investigate the vulnerabilities of K-12 students, higher education students, teachers, principals, and general adult learners to deepfakes related to climate change and identify population and video characteristics that can be leveraged in mitigation approaches. To date, the anticipated prevalence of deepfakes across societal contexts has motivated a large body of work seeking to develop algorithmic techniques to detect deepfakes26–37. However, these algorithms exhibit low rates of successful detection and are not robust across deepfake types, content formats, content characteristics, and datasets20,38. Parallel efforts to advance user-focused solutions are nascent and characterized by high failure rates39. Recent work indicates that human-machine teams show promise for overcoming these challenges via complementary approaches to identifying deepfakes40–42; these studies, however, do not account for the social and individual characteristics that modulate individuals’ vulnerabilities to deepfakes. The successful design, development, and deployment of human-machine teams for deepfake detection and other mitigation strategies requires a comprehensive understanding of individuals’ abilities to successfully detect deepfakes, the personal characteristics that moderate individuals’ vulnerability to deepfakes, and the digital content characteristics that influence successful detection43,44. The enabling data, however, have not yet been made available.
The detection and mitigation of deepfakes are particularly needed within STEM education given increasing access to and reliance on readily available digital educational content by both youth and adult learners45. To date, work investigating the vulnerabilities of K-12 students to STEM misinformation has tended to focus on deliberately falsified text-based content and media literacy46–50. A limited number of studies have investigated adults’ vulnerability to deepfakes, but these have been limited to deepfakes depicting politicians: how exposure impacts voters’ attitudes toward the politicians depicted in the videos, how vulnerability can be moderated by personal characteristics (e.g., religious convictions, political orientation), and attempted inoculations within these contexts41,51,52.
Climate change is a particularly compelling aspect of STEM to explore because its polarized nature has left this domain especially vulnerable to digital misinformation. Climate change misinformation outside of deepfakes is pervasive and typically relies on recipients’ motivated cognition to protect against ideologically or economically threatening scientific evidence53,54 to gain traction. Weak media literacy skills, particularly among K-12 students, have also been shown to moderate susceptibility55. Historically, producing convincing fabricated or manipulated digital content (data, videos, audio, etc.) related to climate change has been much more challenging54. However, the emergence of AI algorithms as a tool for manipulating digital content – particularly to create deepfakes – increases the risk of exposure to convincing climate change misinformation56. Additionally, it is currently unknown whether deepfakes present novel threat vectors that exploit vulnerabilities similar to those exploited by mainstream climate change misinformation or whether deepfakes expand the misinformation attack surface.
To investigate the vulnerabilities of the education system to climate change deepfakes, we fielded a survey that embedded a series of randomly assigned authentic or deepfake videos on climate change. We then asked respondents to identify each video as authentic or manipulated and gathered information regarding respondents’ demographics, background knowledge of climate change, learning habits, and perspectives on deepfakes. We found that between one third and over half of respondents were unable to correctly identify the authenticity of videos, regardless of whether the video was authentic or a deepfake. In aggregate, U.S. adults and educators were less likely to correctly identify deepfake videos than authentic videos, while middle school and higher education students were more likely to identify deepfake videos than authentic videos. However, vulnerability fluctuates across individual deepfake videos and can be quite severe. Heterogeneity analyses indicated that an individual’s susceptibility varies as a function of age, political orientation, and trust in information sources. Further, vulnerabilities increased dramatically as individuals were exposed to more potential deepfakes, suggesting that deepfakes can become more pernicious without educational interventions. An analysis of the video characteristics that respondents reported drove their decisions indicated that the social context in which deepfakes are embedded could provide a promising avenue for educational mitigation strategies. We conclude by discussing the implications of these results for the development of technical and social mitigation strategies for combating STEM-focused deepfakes.