As social beings, we are inherently inclined to calibrate our emotions, thoughts, and behaviours with those around us, especially when we share an affinity with one another through common beliefs, values, and worldviews. Human minds are designed for social cognition and are thus highly adept at a collective behaviour called ‘shared intentionality’: our unique ability to engage in collaborative interactions through which we share psychological states with one another (Gilbert, 1989; Searle, 1995; Tuomela, 1995; Tomasello, 2007). In fact, Vygotsky proposed that what distinguishes human cognition from that of other species is not our individual intellectual abilities, but rather our capacity to think collaboratively with others and to enhance our intellectual agility through that collaboration (noted by Tomasello, 1999; Tomasello et al., 2005; Tomasello, Kruger & Ratner, 1993). We share this socially oriented mode of thinking with other primates such as the chimpanzee, but human attention goes well beyond simply noticing and looking toward what others in our proximity are gazing at; we also want to share our intention with others by ‘being in the moment with them.’ Joint attention is not only about experiencing the same thing with others at the same time, but also about our awareness that we are doing it together (Tomasello, 1995). This unique ability has been critical to the success of our collaborative activities: using language to communicate, inventing symbolic systems to perform sophisticated mathematical calculations, and creating complex social institutions to instill greater degrees of intersubjective cooperation, among many other human accomplishments across history. Our ability to share common psychological ground with others has been paramount to our strength, fitness, and success as a species.
The invention of machine learning and generative AI is part of our intellectual and collaboratively inventive legacy as a highly sentient species, yet it is a technological development that, paradoxically, now challenges our inherent collective strength as highly cooperative and collaborative beings. Machine learning has been developed and adapted to assess users’ personalities according to their digital footprints, and research conducted at the University of Cambridge has found that “private traits and attributes are predictable from digital records of human behaviour” (Kosinski et al., 2013). This use of machine learning to assess personality is informed by a long-standing technique for measuring our psychology called ‘psychometrics’: the measurement of people’s emotional and mental traits, including intelligence, personality, and skills. Psychometric machine learning is designed to make statistical predictions about the relationship between our online activities and our personality, and generative AI factors those predictions into its own algorithmic protocol when curating content and information attuned to our unique personality profile (Heuer, 2020). These digital activities in online spaces are not inherently problematic, but they become problematic when they are “used to covertly exploit weaknesses in their character and persuade them to take action against their own best interest” (Matz et al., 2017).
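To make this kind of statistical prediction concrete, the following minimal sketch shows how a psychometric model of this sort might be trained. It assumes a matrix of digital-footprint features (e.g., binary indicators of page ‘likes’) and an observed personality trait score per user; the synthetic data and the choice of ridge regression are illustrative assumptions, not a reproduction of the published Cambridge pipeline.

```python
# Minimal sketch: predicting a personality trait from digital-footprint
# features with a regularized linear model. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 1000 users x 500 binary "like" indicators, plus an
# observed trait score (e.g., openness) for each user.
X = rng.integers(0, 2, size=(1000, 500)).astype(float)
true_weights = rng.normal(0, 0.1, size=500)
y = X @ true_weights + rng.normal(0, 0.5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ridge regression shrinks coefficients, which matters when the number of
# footprint features rivals the number of users.
model = Ridge(alpha=10.0).fit(X_train, y_train)
print(f"Held-out R^2: {r2_score(y_test, model.predict(X_test)):.2f}")
```

A model fitted this way can then score any new user’s footprint, which is the step that downstream content-curation systems are said to exploit.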
Psychometric machine learning and generative AI have become conventional instruments in electioneering toolboxes, used for personalizing political communication and for micro-targeting voters based on predictions about their personalities. While these digital tactics are certainly disconcerting, they are only part of the explanation for why such campaigns are so effective at circulating ‘fake news’ and misinformation while generating filter bubbles and echo chambers. The discreet presence of these digital activities in online environments makes discerning the difference between content created by generative AI and content produced by fellow human peers exceedingly difficult, and our social brains become vulnerable to the seductive allure of like-minded representations that appear to be aligned with our values, beliefs, and worldviews.
Increasingly, political ideologies and public opinion are shaped online amidst automated digital processes that operate beneath our awareness on social media platforms like Twitter. Campaigning candidates have capitalized on the strategic value of integrating psychometric machine learning and generative AI into their online electioneering strategies. This approach allows them to inject targeted messaging discreetly into online discussions based on machine-learning predictions about its receptivity. In Canada, both the Liberal and Conservative Parties have utilized the services of data analytics firms during electoral campaigns to automate the circulation of targeted messaging according to constituents’ unique personality profiles (Kingston, 2017). The challenge, of course, is not that we are exposed to the digital electioneering techniques of psychometric machine learning and generative AI, but that our social psychology is designed to respond favourably to others who share common values, beliefs, and worldviews, and generative AI can be designed to simulate human activity within online spaces in ways statistically informed by our individualized psychometric profiles. The sense of affinity cultivated between us and the content curated for us by generative AI helps calibrate our attention and intention toward a shared political goal or agenda, especially when we experience a sense of belonging together.
Social psychologists often refer to this affinity as ‘interactive alignment,’ a term describing the ways in which individuals attune their communication behaviours and styles to those with whom they interact (Garrod & Pickering, 2006, 2007, 2009, 2011, 2013, 2014; Gallotti et al., 2017; Menenti et al., 2012; Rassenberg et al., 2020). In fact, our social predilection for interactive alignment is so pronounced that the phenomenon has also been observed in human-computer interactions (Branigan & Pearson, 2006; Branigan et al., 2010; Branigan, Pickering & Pearson, 2003; Shen & Wang, 2023). The significance of this intersubjective behaviour to our everyday interactions and interpersonal success may be most pronounced in the context of a job interview, during which we are often inclined to mirror the speed, volume, and tone of our interviewer’s speech in an effort to strike a connection. We may also adapt our non-verbal and emotional cues, such as maintaining eye contact, adopting open body language, and expressing enthusiasm for the company, to cultivate a sense of mutuality and trust. We engage in interactive alignment with others so effortlessly that it can feel as natural as breathing, and this propensity, anchored in our social psychology, makes us cognitively and emotionally vulnerable to electioneering tactics on social media that leverage the statistical operations of psychometric machine learning and generative AI.
Drawing on a 2017 Oxford working paper on computational propaganda, McKelvey and Dubois (2019) note that political parties often employ machine learning and generative AI during elections to micro-target constituents with personalized communications, to craft online messaging strategies based on the psychometric profiles of targeted voters, and to artificially enhance the perceived popularity of specific ideological messages. Political representatives frequently use social media to propagate political ideologies, and politicians are especially active on Twitter: in Canada, 90% of them post tweets monthly, with 55% posting weekly (Hughes, 2019). They often employ bots to disseminate content that supports their agendas or criticizes opposing points of view. On the receiving end, 30% of Canadians aged 18-35 rely on online news sources (Maru Group, 2022), and 62% of those aged 15-24 get news specifically from social media sites (Statistics Canada, 2023). Research from Toronto Metropolitan University’s Social Media Lab (Dubois et al., 2018) reveals that 34% of Canadians aged 18-24 and 28% of those aged 25-34 share their political opinions on social media monthly. Given that social media interactions between politicians and constituents are heavily mediated by AI algorithms, political ideologies emerge at the intersection of political strategy, algorithmic computation, and interpersonal dynamics, fostering online environments that are likely to induce collaborative cognition among humans and AI-mediated bots.
This paper examines potential instances of interactive alignment between human users and bots – automated software programs designed to engage in online interactions that simulate human behaviour on social media. With a specific focus on Twitter (now operating as X), the central hypothesis investigated in this paper posits that the psychometric machine learning and generative AI underlying bot activity on social media platforms like Twitter compel human users to interactively align with the conceptual value of political content posted by Twitterbots.
It also considers the potential impact of integrating AI technologies for disseminating political information on each citizen’s freedom of conscience and judgment, as outlined in the Canadian Charter of Rights and Freedoms (Canadian Charter of Rights and Freedoms, 1982, s 2(a)). Supreme Court Justice Antonio Lamer, for instance, emphasized that these rights lie “…at the heart of our democratic political tradition. The ability of each citizen to make free and informed decisions is the absolute prerequisite for the legitimacy, acceptability, and efficacy of our system of self‑government” (Dickson, 1985: Supra, n. 10 at 353-354; 361-362). In light of this observation, this paper argues that covert manipulation of political thought through algorithmic computations threatens the foundational principles of Canada’s democratic institutions, which uphold the legitimacy, acceptability, and effectiveness of our electoral processes.
Interactive alignment theory has informed research on the influence of computer-generated text on human users, but it has not yet been used to study the impact of machine learning and generative AI on political thought on social media. Moreover, the phenomenological link between interactive alignment and collaborative cognition as drivers of AI-mediated bots’ success in influencing political thought on social media has yet to be fully investigated.
Given that interactive alignment and collaborative cognition are latent variables that must be inferred from quantifiable data, this study employs computational corpus analysis tools to categorize linguistic units in tweets into quantifiable semantic categories. These units are used to assess the degree of association between statistically significant language use patterns embedded in tweets posted on Twitter by humans and bots using the same hashtag (a minimal sketch of this procedure follows the research questions below). This paper thus seeks to explore the following research questions:
Q1. Do tweets posted by Twitterbots compel human users to interactively align with that content?
Q2. Can interactive alignment theory elucidate the collaborative cognitive processes between human users and AI algorithms in shaping political discourse on social media?
Q3. What are the ethical implications of AI-mediated political influence on social media for democratic principles, as framed by the Canadian Charter of Rights and Freedoms?
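As a rough illustration of the corpus procedure referenced above, the sketch below maps tweet tokens onto semantic categories via a toy lexicon and tests the association between author type (human vs. bot) and category frequencies with a chi-square test. The lexicon, example tweets, and smoothing constant are invented placeholders; published corpus studies typically assign tokens to validated category schemes such as LIWC or USAS rather than a hand-built dictionary.

```python
# Minimal sketch: mapping tweet tokens to semantic categories and testing
# human-vs-bot association with a chi-square test. All data are placeholders.
from collections import Counter
from scipy.stats import chi2_contingency

# Hypothetical semantic lexicon standing in for a validated scheme.
LEXICON = {
    "vote": "POLITICS", "election": "POLITICS", "party": "POLITICS",
    "angry": "AFFECT", "proud": "AFFECT", "afraid": "AFFECT",
    "we": "SOCIAL", "they": "SOCIAL", "us": "SOCIAL",
}

def category_counts(tweets):
    """Count semantic-category hits across a list of tweets."""
    counts = Counter()
    for tweet in tweets:
        for token in tweet.lower().split():
            if token in LEXICON:
                counts[LEXICON[token]] += 1
    return counts

human_tweets = ["We vote because we are proud", "They make us angry"]
bot_tweets = ["Vote for the party", "The election makes them afraid"]

human, bot = category_counts(human_tweets), category_counts(bot_tweets)
categories = sorted(set(human) | set(bot))

# 2 x k contingency table: rows = author type, columns = categories;
# add-one smoothing avoids empty cells in this tiny example.
table = [[human.get(c, 0) + 1 for c in categories],
         [bot.get(c, 0) + 1 for c in categories]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```

A low p-value here would indicate that category usage diverges between the two groups; conversely, statistically indistinguishable distributions on a shared hashtag would be consistent with the alignment hypothesis explored in Q1.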
Given increasing concern that exposure to AI-mediated misinformation online undermines public trust in media and confidence in institutions (Statistics Canada, 2023), empirical analysis exploring these questions through a mixed-methods approach will contribute significantly to the existing literature, as well as to our general understanding of the political and psychological properties of bot-generated content on social media.