Current internet-based technological advancements, such as wearable devices and mobile phone applications (apps), have enabled constant real-time health monitoring of people with various diseases, and users have become more conscious of their own health and that of their loved ones (García-Magariño et al., 2019). Even long before this pandemic, in 2012, researchers reported that almost 59% of US adults searched for health information online. That number has since increased to 75%, with more than a billion health-related searches occurring on Google daily. There is no doubt that individuals are relying more on internet search engines for their health-related queries (Rodgers & Massac, 2020). Nevertheless, given the large amount of inaccurate information online, users can quickly become misinformed. For example, one misconception found online claims that eating apricot seeds will cure cancer. Not only is there no scientific evidence to support this claim, but it is well established that eating apricot seeds may even cause cyanide poisoning (Swire-Thompson & Lazer, 2020).
Public health disasters and misinformation
Misinformation is written material that is false, unreliable, or not scientifically validated, regardless of the author's intent. Most discussions of misinformation have focused on malicious attempts to taint social media platforms with harmful and inaccurate information. False or fake news, misinterpretation of a drug protocol, or the presentation of unrealistic claims can have dramatic effects on public health (Southwell et al., 2019). Widespread misinformation, fake news, and rumors draw attention away from accurate information, real public health challenges, and the healthcare professionals who battle a disease outbreak while protecting population health and improving treatment modalities (Southwell et al., 2019). Such misinformation and rumors pose a significant challenge to administrations combatting any public health crisis or disease outbreak. The flow of biased and inaccurate information can quickly dilute the seriousness of the actual issue and hamper the functioning of healthcare policy and disaster management (Na et al., 2018).
Healthcare professionals and administrations faced challenges tackling misinformation, such as fake news and rumors, during previous disease outbreaks. Misinformation was widespread during the early stages of the human immunodeficiency virus (HIV) epidemic, which was plagued by conspiracy theories, rumors, and misinformation for many years, with the effects still visible in some regions to this day (Mian & Khan, 2020). At the time of the avian influenza H5N1 outbreak in 2004, the World Health Organization's (WHO) Western Pacific Regional Office identified around forty rumors, of which only nine were verified to be accurate (Samaan et al., 2005). During the West African Ebola virus epidemic in 2014, there was widespread fear and attention among US-based users, followers of Western media, and users of social media platforms such as Twitter (Sell et al., 2020).
Misinformation during coronavirus pandemic
The ongoing pandemic of the novel coronavirus disease 2019 (COVID-19) has strained the capacity of health facilities, even in developed countries with proven and robust healthcare systems (Pillai et al., 2020). The pandemic has brought unexpected, sudden, and unparalleled damage to global health and socio-economic frameworks. To minimize the spread of COVID-19, most countries enforced societal-level lockdowns, and citizens resumed office work remotely and carried out activities online while staying at home as much as possible (Ding et al., 2020). During the lockdowns, people have used internet search engines and social media platforms to gain information about COVID-19. The nature of social media panic's impact varies with an individual's gender, age, and level of education, and social media has played a vital role in spreading anxiety about COVID-19 in many territories. The COVID-19 pandemic has been termed the first social media infodemic (Ahmad & Murad, 2020). During the Italian lockdown, people increased their digital media and internet usage, even near bedtime (Cellini et al., 2020). At the Italian CoMuNe laboratory, Gallotti et al. (2020) set up a COVID-19 “infodemic observatory,” using artificial intelligence (AI)-integrated automated software to follow the 4.7 million tweets on COVID-19 streaming past every day. Cellini et al. (2020), in turn, reported on about 1.3 million posts and 7.5 million comments on COVID-19 from several social media platforms.
Pandemic fear among the population can promote online searches for unproven and unprescribed therapies. If fake news and misinformation spread in an uncontrolled manner, the consequences can be fatal. False news generally travels faster than reliable, authentic reports on social media platforms such as Twitter (Vosoughi et al., 2018; Liu et al., 2020). Much misinformation and many rumors about COVID-19 have spread through online search engines and social platforms. Technology entrepreneurs and investors shared a document on Twitter promoting the malaria drug chloroquine for treating COVID-19, many of them citing successful therapeutic outcomes in China and South Korea. When high-profile individuals such as the entrepreneur Elon Musk promoted chloroquine, it attracted the attention of the general public, potentially influencing personal decision-making (Ball & Maxmen, 2020; Liu et al., 2020). That misinformation circulated rapidly before the results of a small, non-randomized French trial of the related drug hydroxychloroquine had appeared; at that time, the article was still in press (Gautret et al., 2020). Hospitals have reported poisoning cases in which individuals suffered toxicity from chloroquine-containing pills taken in the hope of treating COVID-19 (Ball & Maxmen, 2020). At the beginning of July 2020, the WHO released a press note about discontinuing hydroxychloroquine trials. A more recently published original article showed that hydroxychloroquine did not improve clinical status at two weeks compared with standard care (Cavalcanti et al., 2020).
Moreover, over two thousand Iranians were poisoned by swallowing methanol after a misleading social media message urged people in Iran to prevent SARS-CoV-2 infection by drinking alcohol. Around nine hundred patients poisoned by illicit alcohol had to be admitted to intensive care units (ICUs), and almost three hundred died (Soltaninejad, 2020). Global coverage of panic-buying in online media and social networks only served to promote the same behavior, leading to the stockpiling of drugs and vaccines as a method of preparation during the COVID-19 pandemic (Arafat et al., 2020).
Technological approaches to control and trace COVID-19
To control and track the trajectory of COVID-19, governments have implemented various digital health surveillance tools, such as smartphone-based apps for COVID-19 contact tracing (Jalabneh et al., 2020). Some administrations have utilized AI techniques integrated with 5G technology and aerial drones for real-time COVID-19 tracking (Hussein et al., 2020a). Despite technical limitations and concerns over users' data privacy and security, these robust surveillance systems have provided proper guidance to healthcare professionals (Hussein et al., 2020b).
Machine learning in the battle against COVID-19 misinformation
Researchers have suggested that the US Food and Drug Administration (FDA) must warn the general population against acquiring unapproved, unprescribed therapies and medicines. Google integrated an educational website into search results related to the COVID-19 outbreak and noted that it could be extended to searches for unapproved COVID-19 medications (Liu et al., 2020). To identify fake news, scientists have developed machine learning-based credibility inference models for false news and misinformation, which form a deep diffusive network model that jointly represents news articles, writers, and topics (Zhang et al., 2020; Thota et al., 2018; Jain et al., 2019).
Machine learning (ML) algorithms are among the major applications of AI. ML uses statistics to look for patterns in massive data sets, where the data can be numbers, words, images, and so on. ML has long been used to detect spam emails by analyzing the text of a message to estimate the likelihood that it comes from an actual person rather than being a mass-distributed advertisement or solicitation. More recently, ML has been used by Google, YouTube, and others to make recommendations based on a user's history of visited content. The rise of social media spreads misinformation and unproven treatments for infectious diseases such as COVID-19 at lightning speed. A vast amount of misinformation is available on the internet, and it keeps growing day by day. Therefore, ML is a promising candidate for a counterstrategy that automatically authenticates a given piece of information.
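The spam-detection approach mentioned above can be illustrated with a minimal sketch. The following Python code implements a multinomial Naive Bayes text classifier, a standard technique for this task; the class names and any training phrases used with it are hypothetical illustrations, not data or code from any real system.

```python
import math
import re
from collections import Counter, defaultdict


def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class NaiveBayesClassifier:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, documents):
        """documents: iterable of (text, label) pairs."""
        for text, label in documents:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        """Return the most probable label for a new message."""
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior: how common this label is overall.
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                # Log likelihood with Laplace smoothing for unseen words.
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Trained on a handful of labeled snippets, the classifier can then score new text; real systems would of course use far larger corpora and richer features than raw word counts.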
COVID-19 misinformation in social media and machine learning
To combat the propagation of misinformation, some of the biggest internet and social media companies have introduced technical gatekeepers to control what information can be sent over their networks. Twitter displays warning messages on tweets containing misleading information about COVID-19 and provides an option to direct the user towards reliable, authoritative sources such as the WHO or national health agencies. It has also implemented machine learning algorithms to detect accounts used to spread misinformation, e.g., accounts promoting treatments that are not proven to be effective.
WhatsApp launched a chatbot to connect its millions of users with various fact-checking organizations across the globe (WhatsApp, 2020a). This allows a user to double-check information. The platform also introduced the WHO's alert notification, a service that responds to public queries about coronavirus and provides official information 24 hours a day, worldwide (WhatsApp, 2020b). Facebook deployed machine learning algorithms to detect advertisements making false claims, such as that homeopathic remedies can prevent, cure, or protect against COVID-19 (Mosseri, 2020). It also banned the sale of commercial safety products such as face masks and hand sanitizers during the outbreak to curb panic buying. Instagram, to disrupt the sharing of unverified information on its platform, has used several algorithms to identify and track hashtags that are frequently associated with false or misleading information (Instagram, 2020). Google has introduced “Fact Check Explorer,” which verifies information available online through authenticated third-party fact-checkers (Google Search Help, 2020). The results of a search query indicate whether the claim from each source is valid, false, partly right, or partly wrong. The fact-check service is available in multiple languages and can also be used to validate images.
As we have mentioned above, based on the scientific literature to date, scientists have been utilizing surveillance systems to track the infodemic and analyze how outbreak-related fake news, misinformation, and rumors spread in online media and social platforms (Ball & Maxmen, 2020). However, there is an immediate need for a tool that notifies users about scandals, fake news, and misinformation while they browse web search engines.
To tackle this public health-related misinformation and rumors, and to raise awareness among a panicking public, a proper screening approach could play a vital role. Using natural language processing (NLP) techniques, several sensitive keywords can be set to trigger a warning mechanism. The Internet Society (ISOC) prefers the word “filtering” over “censorship,” as the latter carries a sharply negative connotation, whereas the former implies a moderate approach (Seidler & Rabachevsky, 2017). We therefore plan to filter, or screen, internet content before it is shown to unsuspecting web users. For this purpose, a web browser extension can be built to carry out the screening operation.
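As a minimal illustration of the keyword-triggered warning idea, the following Python sketch matches a text snippet against severity-ranked keyword patterns. The pattern lists here are hypothetical placeholders invented for illustration; a real deployment would curate them with input from health authorities and fact-checking organizations.

```python
import re

# Hypothetical severity-ranked patterns; placeholders for illustration only.
SEVERITY_PATTERNS = {
    "high": [
        r"\bmiracle\s+cure\b",
        r"\bdrink(ing)?\s+(alcohol|methanol|bleach)\b",
    ],
    "medium": [
        r"\bunapproved\s+(drug|remedy|treatment)s?\b",
        r"\bsecret\s+remedy\b",
    ],
}


def screen_text(text):
    """Return the highest severity level triggered by the text, or None."""
    lowered = text.lower()
    for severity in ("high", "medium"):  # check the most severe tier first
        if any(re.search(p, lowered) for p in SEVERITY_PATTERNS[severity]):
            return severity
    return None
```

The returned severity level could then drive the warning shown to the user, for example a prominent alert for "high" and a milder notice for "medium".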
A web browser extension can be described as a packaged piece of software installed into the web browser that a user primarily employs for searching the world wide web (WWW). An extension can add a new feature to a browser's search engine, augment existing functionality, update a visual theme (Selig, 2012), or, in this case, screen search result content and show warnings according to severity. Several browser extensions already exist to block unwanted content (Adblock, 2020) or provide trust ratings for websites (WOT, 2020). However, an extension that creates awareness specifically in the public health domain is a largely unexplored area.
Here, we propose a novel approach to preventing the spread of misinformation through a web search engine extension, which, with the help of ML, will provide valuable information and validity notifications before the user clicks on any of the search results.
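The proposed flow can be sketched as follows. In this Python snippet, a placeholder scoring function stands in for the trained ML model: the extension would intercept each search result snippet, score it, and attach a warning flag before the result is rendered. All function names, the suspect-term list, and the threshold are illustrative assumptions, not part of any implemented system.

```python
# Placeholder suspect-term list; a trained ML model would replace this scorer.
SUSPECT_TERMS = ("miracle", "guaranteed", "secret remedy", "instant cure")


def score_snippet(snippet):
    """Toy risk score: the fraction of suspect terms present in the snippet."""
    lowered = snippet.lower()
    hits = sum(term in lowered for term in SUSPECT_TERMS)
    return hits / len(SUSPECT_TERMS)


def annotate_results(snippets, threshold=0.25):
    """Attach a score and a validity warning flag to each result snippet."""
    annotated = []
    for snippet in snippets:
        score = score_snippet(snippet)
        annotated.append({
            "snippet": snippet,
            "score": score,
            "warning": score >= threshold,  # extension shows a notice if True
        })
    return annotated
```

In an actual extension, the annotated list would drive the user interface, overlaying a notification on each flagged result before the user clicks through.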