Abstract
Although media exposure measures suggest that only a fraction of online information can be considered inaccurate, false, or deceptive (e.g., Acerbi et al. 2022), news users across the globe are highly concerned about misinformation (Newman et al. 2023). As risk perceptions and threat frames related to misinformation may affect people’s news trust and their assessment of trustworthy information (Van der Meer et al. 2023), it is important to explore the relative risk assessment of mis- and disinformation associated with different issues, actors, and sources of information. To better understand how estimates of misinformation are attributed to different domains of information, we take an audience-centered approach to misinformation by exploring relative perceptions of risk: Where do news users think that falsehoods come from?
Crucially, we currently lack refined insights into how people associate misinformation with different information domains across countries with varying levels of resilience to misinformation (e.g., Humprecht et al. 2020). We argue that, to better understand global resilience to misinformation, we need to explore (1) the perceived prevalence of misinformation in people’s media diets, (2) the domains associated with higher and lower levels of misinformation salience; (3) country-level differences in risk assessments. This study relies on representative survey data collected across seven countries, representing a diverse sample of democracies characterized by different levels of perceived risk perceptions (Knuutila et al. 2022) and resilience to actual misinformation (Humprecht et al. 2020): Argentina, Brazil, Chile, Mexico, United States, Spain, and The Netherlands. Considering that the prevalence of misinformation may be largely contingent upon platform, issue, modality, and other contextual factors (e.g., Brennen et al. 2021; Yang et al. 2023), we do not only map general risk perceptions but also map domains associated with lower or higher rates of misinformation.
Moving beyond the Western focus central in most prior studies on misinformation, our comparative study aims to map risk perceptions related to misinformation salience across countries in the Global North and South. By moving beyond general risk perceptions, we arrive at refined insights into the extent to which people across different countries associate misinformation with less-resilient informational domains, such as unverified and ungated content on social media or polarizing issues. This comparative endeavor lays the groundwork for the further assessment of the resilience of different countries to mis- and disinformation. Combined with more refined exposure data, it aims to inform the assessment of the proportionality of risk perceptions across domains of information.
Theoretical Framework
Beyond an Informational Disorder: Misinformation as a Perceptual and Discursive Threat
Moving beyond the lack of facticity in misinformation, extant literature makes an important conceptual distinction between misinformation and disinformation (e.g., Chadwick and Stanyer 2022; Hameleers 2023; Wardle and Derakhshan 2017). Misinformation refers to false or inaccurate information in general. Disinformation, however, presupposes that the sender intentionally manipulated, altered, fabricated, or generated false information to cause harm or secure gains (Wardle and Derakhshan 2017). The distinction between misinformation and disinformation is difficult to make based on an analysis of isolated narratives on their own. Hence, the intentional dimension of disinformation may only be deduced from a comprehensive analysis of the context of disinformation, such as strategic contexts (i.e., elections), financial structures (i.e., advertising strategies), and the potential for gain in given contexts of deception (Hameleers 2023). To explore whether people perceive this difference, this article first focuses on perceptions of misinformation and disinformation related to the perceived prevalence of false versus intentionally deceptive information in people’s overall information diets.
Despite popular concerns about mis- and disinformation (e.g., Newman et al. 2023), empirical research has yet to offer support for high levels of mis- and disinformation in people’s (online) newsfeeds (e.g., Altay et al. 2023). Meta-analyses and systematic literature reviews estimate that mis- and disinformation make up less than 1 percent of people’s media diets (e.g., Acerbi et al. 2022). These numbers should be interpreted with caution because they are mostly based on analyses of public online media diets of people in the Global North (e.g., Acerbi et al. 2022), largely neglecting countries in the Global South that may be more vulnerable to mis- and disinformation (Humprecht et al. 2020). Moreover, existing analyses focus mostly on textual disinformation, neglecting the abundance of visual disinformation narratives on social media (Weikmann and Lecheler 2023). This already indicates how difficult it is to comprehensively assess the prevalence of mis- and disinformation, and the crucial role that variations in domains, such as modality and platform, may play in the identification of false information.
Survey data collected across different countries suggest that people may arrive at comparably high estimates of misinformation’s prevalence, which may be fueled by public, media, and academic attention to disinformation and the weaponization of the term (e.g., Van der Meer et al. 2023; Waisbord 2018). Cross-country studies suggest that people’s estimates are affected by national contexts—and cultural differences in particular (Knuutila et al. 2022). The potential political consequences of risk perceptions related to mis- and disinformation—including avoidance of established information sources, selection of hyper-partisan alternative news sources, or even engagement in undemocratic behaviors (e.g., Hameleers et al. 2020)—underscore the societal relevance of focusing on risk perceptions related to false information.
Given that large-scale survey research indicates that 56 percent of all participants are (very) concerned about discerning true from false information (Newman et al. 2023), it is crucial to break this large number down and map risk perceptions related to false information across different informational domains. This study conceptualizes risk perceptions as the estimated proportion of misinformation or disinformation in people’s media diets across different domains of information. Concretely, participants in the survey were asked to estimate how much of the information they encountered via different media types, modalities, topics, and communicators contains inaccurate or deceptive information. Although we acknowledge the difficulty of this task for news users, we are mainly interested in the relative differences in these estimates across domains rather than in their absolute accuracy.
To measure the difference between misinformation and disinformation in people’s risk perceptions, we distinguish between the estimated proportion of generally false information versus deceptive or intentionally false information (i.e., caused by the intention to hide reality for political gain) (e.g., Hameleers and Brosius 2022). However, our aim is not to classify risk perceptions related to misinformation and disinformation as either accurate or inaccurate, but rather to explore the relative assessment of risk across domains of information and national contexts. We raise the following specific research question: To what extent do people associate their information environment with mis- and disinformation? (RQ1).
Domains of Misinformation: Identifying Risk Perceptions Across Low- and High-Risk Information Domains
We understand informational domains as an umbrella term for contextual variances, discursive embeddings, formats or modalities, and issues that may be more or less vulnerable to inaccurate or false content. Similar to domain-level analyses of misinformation exposure (e.g., Acerbi et al. 2022; Grinberg et al. 2019), we understand domains as broader than the context offered by platforms or informational sources (i.e., hyper-partisan media platforms). Hence, domains may refer to the content features in which misinformation is embedded (i.e., the modality or issue covered), the actors that present it (i.e., political actors or ordinary citizens), or the sources involved in its dissemination and platforming (i.e., social media).
Although we based our domain-level inventory on an extensive literature review (e.g., Acerbi et al. 2022; Damstra et al. 2021; Yang et al. 2023), mapping the proportion of mis- and disinformation relative to all information is an extremely difficult task plagued with validity and reliability issues (see, e.g., Altay et al. 2023; Yang et al. 2023). Therefore, we simplified domain-bound resilience to mis- or disinformation by distinguishing between higher- and lower-risk domains of misinformation prevalence. Here, a low-risk domain means a setting (i.e., media platform, issue, actor, format) in which the likelihood of encountering misinformation is lower than in a high-risk domain. We do, however, assume that misinformation is present to a certain degree across all platforms. Hence, false information due to “honest mistakes” is inevitable when attempting to cover news events (Hameleers et al. 2020), whereas disinformation may be a rarer event that is tied to specific platforms and contexts of information (e.g., Acerbi et al. 2022; Damstra et al. 2021).
In mapping perceived misinformation exposure across lower- and higher-risk domains, we distinguish between media sources or platforms, actors spreading (false) information, topics and issues associated with misinformation, and the formats in which information is presented. For these specific domains of misinformation, we decided not to distinguish between misinformation and disinformation perceptions. To avoid overburdening or confusing participants with detailed estimates of misinformation salience across all these contexts, we instead used a general instruction asking participants to rate the relative prevalence of false and/or misleading and deceptive information for different sources, actors, issues, and formats.
As there is a lack of (comparative) empirical research on the actual proportion or relative risk of encountering misinformation across all relevant domains, we leave it to future research to further validate the distinction between the tentative categories of lower- and higher-risk domains. Based on the theoretical inventory presented in Table 1, we forward the following exploratory research question: To what extent is misinformation differentially associated with high- and low-risk information domains? (RQ2). The sections below offer a literature review of the extent to which different domains are expected to be more or less resilient to misinformation.
Hypothesized Vulnerability and Resilience to Misinformation Across Communicative Domains.
High-Risk Misinformation Sources
We postulate that established media sources are less likely to contain misinformation than alternative or social media (see, e.g., Bennett and Livingston 2018; Guess et al. 2023; Hameleers and Yekta 2023; Tucker et al. 2018). Not bound by traditional journalistic norms, social media allow for an ungated participatory discourse and therefore pave the way for the dissemination of disinformation (e.g., Tucker et al. 2018). Alternative and hyper-partisan media platforms offer a form of counter-knowledge to the mainstream or established facts (e.g., Bennett and Livingston 2018; Heft et al. 2021). Despite referring to experts, empirical evidence, or other seemingly legitimate sources of objectivity, these alternative platforms often lack proper contextualization and voice delegitimizing truth claims without empirical backing (Hameleers and Yekta 2023).
While hyper-partisan alternative media should not be equated with disinformation, and mainstream media may also occasionally disseminate false or deceptive content (e.g., Hallin 2004), we tentatively postulate that social and alternative media sources are more likely contexts for encountering misinformation than established or conventional information channels. Importantly, people may use the same social media platforms for different reasons. Some may use them to access news aggregated by established journalists, whereas others may use them as a gateway to alternative hyper-partisan platforms or opinion leaders that oppose established journalistic routines. Although our analyses do not allow us to differentiate between these motivations and behaviors, we nonetheless postulate that social media settings generally carry a higher risk of misinformation. For example, even when people access regular news via social media, they may be exposed to comments, ads, and embeddings that are false or deceptive, for instance, through astroturfing practices (Zerback et al. 2021).
High-Risk Misinformation Actors
In extant literature, disinformation has mainly been associated with the radical right (e.g., Bennett and Livingston 2018; Marwick and Lewis 2017). Right-wing populists are associated with disinformation as they often voice a destabilizing anti-establishment message that resonates with the aims of disinformation: raising cynicism, increasing polarized cleavages, and attacking mainstream or conventional information to sow discord (Hameleers 2023). However, despite the strong focus on right-wing populism in the literature, other political actors may also be associated with disinformation. Hence, politicians who directly communicate their views and positions to the audience via social media may be likely to use deception to achieve their political goals—especially in polarized contexts where negativity and attacking the opponent are important communication tactics (Lau and Rovner 2009). Political actors in general may profit from not informing voters in an accurate and complete manner, which makes disinformation a likely strategy for politicians to use to either create a more favorable image of the self or a more negative image of the other (Hameleers 2023). Consequently, we consider that political actors—across contexts—may be likely to spread mis- and disinformation as they have a clear incentive to deceive voters for political gain (also see Hameleers 2023). Since foreign influence and political biases have different meanings across the different countries included in our study, we focus on political actors in general to allow for comparability.
We also consider social media influencers who communicate political views without clear political affiliations as likely sources of disinformation (see, e.g., Marwick and Lewis 2017; Starbird 2019). Influencers can be regarded as opinion leaders who create and disseminate content to an unknown audience of followers (Gräve 2019). Although political actors and former politicians can also have a large following, we do not consider them influencers as they have a clear political affiliation. In participatory disinformation networks, influencers play a role in the spread of disinformation due to their large number of followers and their direct influence on public opinion and citizens’ behaviors. Influencers often aim to persuade rather than inform their followers, which on a political level has been connected to the endorsement of false information (e.g., Harff et al. 2022). Influencers can make it seem as if disinformation narratives are acceptable opinions representative of ordinary citizens’ beliefs about certain issues, making them an important source to consider in the disinformation ecosystem (Lukito 2020). Taken together, we tentatively postulate that political actors—on both the left and the right—and social media influencers can be regarded as high-risk actors when it comes to the dissemination of misinformation.
High-Risk Misinformation Topics
According to extant literature, polarized and strongly ideologically divided issues are more likely to be surrounded by false information than less-salient and less-politicized issues (e.g., Freelon and Wells 2020). In addition, a lack of expert agreement or conflicting evidence on certain topics could also cause misinformation to be prevalent. When emphasizing the risks of misinformation, extant literature has mainly identified immigration (Humprecht et al. 2020), climate change (e.g., Lewandowsky et al. 2012), COVID-19 (e.g., Brennen et al. 2021), the Russian invasion of Ukraine, terrorism and armed conflicts (e.g., Erlich and Garner 2023), criminality (Hameleers 2023), and LGBTQIA+ issues (Billard et al. 2023) as issues surrounded by high levels of misinformation. Although this may partially reflect the selection bias or regional focus of existing research, these are arguably highly politicized issues, often surrounded by antagonistic narratives. Although elections may also be considered a high-risk context for misinformation, the different topics included here are often central to political communication in electoral settings. We therefore focus on polarized topics often central in political debates around elections.
High-Risk Misinformation Formats
Finally, we respond to the urgent calls to focus on nontextual forms of disinformation and to incorporate visual communication in disinformation studies (e.g., Weikmann and Lecheler 2023). Specifically, an automated content analysis revealed that 23 percent of all scraped visual communication on Facebook can be classified as false (Yang et al. 2023). Beyond visuals, political advertising, opinion pieces (e.g., Lukito 2020), memes (Brennen et al. 2021), and social media messages disseminated by politicians (e.g., Marwick and Lewis 2017) are additionally regarded as high-risk formats of misinformation. While not strongly tied to (journalistic) principles of objectivity, balance, and impartiality, these formats of communication may nonetheless appear authentic because they offer a direct indication of reality (i.e., ordinary citizens voicing their opinions, or visuals directly portraying an event) which allows them to profit from the realism heuristic associated with nontextual modes of disinformation (Sundar 2008).
Concerns About Mis- and Disinformation Across the Global North and South
Concerns and risk perceptions related to mis- and disinformation vary across countries. The 2023 Reuters Digital News Report points to a large difference between Europe and the Americas: Participants in Europe are the least concerned about mis- and disinformation (53 percent), whereas this increases to 62 percent in Latin America and 65 percent in North America (Newman et al. 2023). In the Netherlands, only 30 percent reported being concerned about whether online news is fake or real, whereas this increases to 85 percent in Brazil. Likewise, the prevalence of mis- and disinformation as an informational threat may differ considerably across country contexts, as postulated in the resilience to misinformation framework (e.g., Humprecht et al. 2020). Polarized settings and countries with lower media trust (i.e., the United States and Brazil) may be more vulnerable to mis- and disinformation than countries characterized by higher media trust and lower levels of polarization (i.e., the Netherlands) (see Humprecht et al. 2020). Consequently, we focus our endeavor on seven countries that likely provide different contexts when it comes to resilience to mis- and disinformation: Argentina, Brazil, Chile, Mexico, United States, Spain, and The Netherlands.
Arguably, our country selection is biased toward Latin American countries, while other countries with low media trust or other factors corresponding to low resilience are omitted (i.e., India, Iran, or China). We focused on Latin American countries for different reasons. First, the problem of disinformation is on the rise in Latin America, affecting public debate and electoral outcomes (e.g., Znojek 2020). Second, Latin American countries have a long history of populist movements (e.g., De la Torre 2017). Although we regard populism as a transnational phenomenon that has affected the political landscape of all countries included in this study, Latin American countries have a longer history of people-centric forms of left-wing populism (e.g., De la Torre 2017) than European countries, such as the Netherlands (Aalberg et al. 2017). Together, these contexts thus offer an important contrast to other settings of low resilience based on the rise of populism.
The resilience framework may also have implications for the low- versus high-risk contexts identified in Table 1. For example, the extent to which established media channels refrain from disseminating disinformation may be largely contingent upon press freedom and the status of journalism in the respective country (Humprecht et al. 2020). In Mexico, for instance, lower press freedom and attacks on journalists critical of the established order indicate that established media are less free to perform their professional and independent roles, which enhances the likelihood of mis- and disinformation spreading. Indeed, in the included countries from the Global South, press freedom is lower than in the Global North (Newman et al. 2023; RSF 2023). Mexico ranks 128th of 180 countries on press freedom, whereas the Netherlands ranks sixth (RSF 2023). Additionally, political polarization varies significantly among the countries studied, with higher polarization in Argentina, Brazil, and the United States, and lower polarization in the Netherlands and Spain (e.g., Humprecht et al. 2020). Finally, at the time of data collection, many democracies were confronted with the upsurge of populist movements that cultivated distrust in the established political order, science, and the mainstream media, which may offer a context for higher estimates of misinformation in countries such as Brazil and Argentina. In Table 2, we present a context-dependent overview of risk factors related to mis- and disinformation for the seven countries included in this endeavor.
Country-Level Indicators of Resilience to Threats of Mis- and Disinformation.
Methods
Our comparative survey study measured perceptions of mis- and disinformation in Argentina, Brazil, Chile, Mexico, United States, Spain, and The Netherlands. Data collection was conducted in October 2023. The survey received ethical approval from the University of Amsterdam under number FMG 48142023. The researchers developed a template survey in English, which was translated into Spanish, Portuguese, and Dutch by native speakers. The formulation of questions in the survey was kept generic to account for regional differences and allow for comparative analyses.
Data Collection Procedures
The survey instrument was programmed in Qualtrics. In the first question blocks, participants were asked about their political preferences and media use. Next, in the main question block, participants estimated the amount of mis- and disinformation they encountered across various contexts. The final block of the survey contained questions on participants’ demographics.
Sample
The recruitment of participants was outsourced to Dynata. Although panelists were invited at random via e-mail, we enforced soft quota to ensure that the samples across countries approached representativeness related to the variables age, gender, and education. Based on an a priori power analysis, we aimed to achieve 450 completes per country. This was achieved in the actual recruitment procedure. The total number of completes was 3,718: Argentina (
Measures of the Estimated Prevalence of Misinformation and Disinformation
As we were interested in the proportion of false information in people’s media diets, we reformulated existing measures about risk perceptions to fit the study’s aim to arrive at a relative risk assessment. (The specific wording of this item and other items is included in Supplemental Appendix A.) In our measures, we understood misinformation as an umbrella-term capturing perceived exposure to false and inaccurate information in general, whereas we defined disinformation as a more specific form of false information in which the intention to deceive was central (also see, e.g., Altay et al. 2023). The scale on which participants estimated the percentage of mis- and disinformation exposure ranged from 0 to 100 (misinformation:
For the domain-specific estimates, we refrained from asking about personal encounters with misinformation, as participants may not use these different domains to similar degrees. Hence, people may not encounter any misinformation in certain domains or on certain issues simply because they do not expose themselves to these domains in the first place. Therefore, to assess the perceived risks of misinformation across domains, we rely on participants’ estimates of the vulnerability of different domains to mis- and disinformation. We verified that there are no substantial or significant differences in answer patterns across the more generic versus domain-specific estimates that relied on a different item wording (i.e., in terms of missing values, outliers, normal distribution, or completion times).
Misinformation Across Sources
We asked participants to estimate the amount of misinformation spread by different media sources. The scale again consisted of a slider ranging from 0 to 100, and participants had to rate several sources. We did not mention specific brands for the wider categories of news as these depended strongly on country contexts. Based on extant literature, alternative and all social media platforms were considered high-risk settings of misinformation (e.g., Hameleers and Yekta 2023; Heft et al. 2021). Yet, it should be acknowledged that people may encounter alternative, hyper-partisan, and regular news via the same social media channels (i.e., Twitter/X or Facebook). Although domain-specific analyses on the actor- and format-level may offer a more refined analysis of these differences, our assessment of the source-level neglects the specific information that is encountered via social media. Traditional media, adhering more closely to journalistic norms, gatekeeping, and fact-checking (e.g., Hameleers and Yekta 2023) were characterized as low-risk settings of encountering misinformation. We constructed an average scale for perceived misinformation prevalence in low-risk sources (Cronbach’s alpha = 0.873) and high-risk sources (Cronbach’s alpha = 0.894).
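The scale construction described above can be sketched as follows. This is a minimal illustration, not our analysis pipeline: the standard Cronbach’s alpha formula is applied to a small synthetic matrix of 0–100 source ratings, and all variable names and values are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-100 misinformation ratings of three low-risk sources
# given by five respondents (synthetic data for illustration only)
low_risk_items = np.array([
    [20, 25, 22],
    [35, 30, 33],
    [10, 15, 12],
    [50, 45, 48],
    [28, 30, 29],
], dtype=float)

alpha = cronbach_alpha(low_risk_items)  # internal consistency of the scale
scale = low_risk_items.mean(axis=1)     # averaged scale score per respondent
```

Averaging across items (rather than summing) keeps the scale on the original 0–100 slider metric, which eases interpretation of the reported means.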
Actors of Misinformation
Based on extant literature (e.g., Bennett and Livingston 2018; Marwick and Lewis 2017; Waisbord 2018), we considered the following actors as high-risk actors of misinformation: Politicians in general, left-wing politicians, right-wing politicians, and social media influencers. Given the more neutral, objective, and unbiased role perceptions associated with the government and international organizations, we classified these sources as comparatively lower in risk. The same applies to ordinary citizens: They should have a lower stake in influencing and deceiving others, and their communication is more likely to be characterized by an honest demeanor (Levine 2014). We constructed an average scale for perceived misinformation prevalence by high-risk actors (Cronbach’s alpha = 0.783) and low-risk actors (Cronbach’s alpha = 0.734).
Topics of Misinformation
Topics such as immigration, climate change, COVID-19, the Russian invasion of Ukraine, terrorism, vaccines, criminality, and LGBTQIA+ issues were regarded as high-risk topics of misinformation based on extant literature connecting misinformation to partisan and polarizing issues (e.g., Bennett and Livingston 2018; Marwick and Lewis 2017), specifically in the context of COVID-19 (Brennen et al. 2021) and armed conflicts, which may be surrounded by propaganda spread by warring sides (Seo and Ebrahim 2016). As the housing market, the economy, and food safety have been less associated with partisan misinformation and polarization, we tentatively regard them as lower-risk issues of misinformation. Again, we created a scale averaging misinformation salience perceptions for lower-risk (Cronbach’s alpha = 0.875) and higher-risk issues (Cronbach’s alpha = 0.916).
Formats of Misinformation
Finally, participants estimated the relative amount of misinformation spread across formats of information and styles of argumentation (i.e., empirical evidence). The following formats of information were rated by participants: Textual information presented as news, opinion pieces by celebrities, expert references, scientific evidence, images of warzones, videos of political speeches, memes on social media, expert claims on social media, images that contradict conventional facts, political advertising, social media messages expressed by politicians, and scientific reports. Based on extant literature, we specifically considered all forms of visual content, opinion pieces, memes, political advertising, and social media messages expressed by politicians as high-risk settings of misinformation (e.g., Yang et al. 2023). Again, two different scales were constructed: lower-risk misinformation formats (Cronbach’s alpha = 0.901) and higher-risk misinformation formats (Cronbach’s alpha = 0.872).
Analyses
First, we compared the estimated prevalence of misinformation and disinformation across all seven countries (Analyses of Variance, ANOVAs) with country as the independent variable and average levels of mis- and disinformation proportions as dependent variables (RQ1). To answer RQ2, which asked about differences in perceived misinformation salience across low- and high-risk domains, we relied on within-subjects mean score comparisons of high-risk versus low-risk contexts across sources, actors, topics, and formats. We not only report findings on the aggregate level but also check for country-level differences in the allocation of misinformation across contexts.
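As a minimal sketch of this analytic strategy, consider the following simulated example. The country labels, means, and spreads are assumptions for illustration only and do not reflect our data; only three of the seven countries are shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated 0-100 misinformation prevalence estimates, n = 450 per country
# (all distribution parameters are hypothetical)
estimates = {
    "NL": rng.normal(45, 15, 450).clip(0, 100),
    "MX": rng.normal(65, 15, 450).clip(0, 100),
    "BR": rng.normal(60, 15, 450).clip(0, 100),
}

# RQ1: one-way ANOVA with country as the between-subjects factor
f_stat, p_anova = stats.f_oneway(*estimates.values())

# RQ2: within-subjects comparison of each respondent's mean estimate for
# high-risk versus low-risk domains (paired-samples t-test)
low_risk = rng.normal(40, 10, 450).clip(0, 100)
high_risk = (low_risk + rng.normal(15, 5, 450)).clip(0, 100)
t_stat, p_paired = stats.ttest_rel(high_risk, low_risk)
```

With group differences of this size and samples of 450 per country, both tests would reject the null; in the actual analyses, pairwise country comparisons (as in Table 3) would follow the omnibus test.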
Results
The Perceived Prevalence of Misinformation Across Countries
To map perceptions of misinformation prevalence across all countries (RQ1), we compared the mean misinformation prevalence perceptions across all seven countries included in the study (see Table 3 for the descriptives and pairwise comparisons across countries). Across all countries, participants on average perceived that more than 50 percent of all information they encounter is misinformation (
Mean Perceived Misinformation and Disinformation Prevalence Across Countries.
To explain this relatively high perceived base rate of misinformation, we explored the open-ended answers that participants gave to explain their concerns (also see Supplemental Appendix B for more detailed in-depth findings). Participants emphasized the difficulty of discerning between true and false information, particularly on social media, given the ease with which factual claims can be deceptively legitimized with evidence, the self-identification of “experts,” or the unclear distinction between facts and opinions. Next to the perceived difficulty of the task, participants also claimed that misinformation was all around and that everything could potentially be mis- or disinformation. The open-ended answers provide valuable context for the high perceived proportion of misinformation reported in the survey. On the one hand, the prominent sentiment of uncertainty about truth discernment may explain why people estimate misinformation to make up about half of all information: Everything can be true or false, and the distinction is difficult to make. On the other hand, many participants emphasized the consistency of mis- and disinformation flows and the difficulty of escaping them when finding information online.
To offer more insights into individual-level differences in perceived misinformation and disinformation, we also explored how left-right self-placement, media trust, and established media use related to risk perceptions of encountering misinformation and disinformation. First, we see that more right-wing participants (
Country-Level Differences in Risk Perceptions
There are some noteworthy country-level differences in risk perceptions (see Table 3). In the Netherlands, the perceived prevalence of both mis- and disinformation is significantly and substantially lower than in the other countries. In Mexico, however, the average amount of mis- and disinformation perceived by participants is highest. In most countries, participants’ estimates of the amount of misinformation versus disinformation are similar. There are two noteworthy exceptions. First, participants in Brazil perceive that their information environment contains more intentionally deceptive disinformation (
The Domain-Specific Nature of Perceived Misinformation Prevalence
Higher- Versus Lower-Risk Sources
To answer RQ2, we compared mean scores of perceived misinformation prevalence across lower- versus higher-risk domains on the level of sources, actors, topics, and formats. First, there is a significant difference between misinformation prevalence in the context of low-risk information sources (
To inspect country-level differences in the differentiation between lower- and higher-risk sources, we constructed a new variable that taps the mean difference between higher- versus lower-risk sources. We then used a one-way ANOVA to explore differences across countries (
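The analytic step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the respondent counts, prevalence scores, and country effects below are simulated for demonstration, and the variable names are hypothetical.

```python
# Illustrative sketch of the difference-score approach: per respondent, compute
# perceived misinformation prevalence in higher-risk minus lower-risk sources,
# then compare that difference across countries with a one-way ANOVA.
# All data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
countries = ["AR", "BR", "CL", "MX", "US", "ES", "NL"]  # the seven cases

diff_by_country = []
for i, country in enumerate(countries):
    n = 200  # hypothetical respondents per country
    # Perceived prevalence (0-100) in higher-risk domains (e.g., social media)
    high_risk = rng.normal(50 + 2 * i, 10, n)
    # Perceived prevalence in lower-risk domains (e.g., established news media)
    low_risk = rng.normal(45, 10, n)
    # Difference score: how strongly respondents differentiate the two domains
    diff_by_country.append(high_risk - low_risk)

# One-way ANOVA: does the high-minus-low difference vary across countries?
f_stat, p_value = stats.f_oneway(*diff_by_country)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F-statistic here would indicate, as in the study's analysis, that the degree to which news users differentiate lower- from higher-risk sources is not uniform across countries.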
Table 4. Cross-Country Differences Between High- and Low-Risk Domains of Misinformation.
Higher- Versus Lower-Risk Actors
There is also a significant difference between misinformation prevalence in the context of low-risk actors (
Higher- Versus Lower-Risk Topics
Aggregate analyses show that lower-risk topics (
The finding that the differences are not as clear-cut in Argentina compared to other countries potentially hints at the sensitivity of classifying topics as lower- versus higher-risk across different national settings. Although we coded the economy in the broadest sense as a relatively lower-risk topic than issues such as the Russian invasion of Ukraine, this was not reflected in the beliefs of participants in Argentina, who were substantially more inclined to associate the economy (
Higher- Versus Lower-Risk Formats
When comparing the average perceived prevalence of misinformation across higher- and lower-risk domains related to the format and modality of information, we see a clear difference in the anticipated direction: Participants are more likely to associate high-risk formats and modalities (i.e., images of warzones, political speeches) with misinformation (
Again, there are some noteworthy differences between countries (see Table 4). In Argentina, the difference between lower- and higher-risk settings was highest. Especially when we look at all formats individually, we find a large discrepancy between political advertising (
Together, our findings clearly emphasize that risk perceptions of misinformation prevalence, even when distinguishing between formats, sources, topics, and actors (RQ2), are highly contingent upon regional differences. In cases such as Argentina, where the economy was under severe strain at the time of data collection and a radical populist celebrity politician fueled cynicism toward political elites, misinformation perceptions concentrated on political campaigns and the economy.
Discussion
The findings from our comparative survey on risk perceptions related to misinformation indicate that, on average, participants believe that about half of all information they encounter is mis- and disinformation, supporting the position that perceptions of misinformation exposure are relatively high (Altay et al. 2023). Such relatively high estimates resonate with alarming messages about the high salience of mis- and disinformation prominent in public, political, and media discourses. On a more positive note, the relative assessment of risk across domains reveals that people do discern between lower- and higher-risk domains of misinformation. Although misinformation is a dynamic concept that is difficult to grasp, and although the boundaries between domains and the variety within them are hard to comprehend, news users in general make a distinction between information domains in their misinformation estimates.
It is beyond the aim of this article to assess the proportionality of concerns about misinformation. Although we indirectly compared estimated misinformation prevalence to existing content analyses (e.g., Acerbi et al. 2022; Yang et al. 2023), validity issues persist in empirical endeavors that aim to map misinformation exposure. Mis- and disinformation can include more than easily identifiable fingerprints, and even honest information sources can contain inaccuracies (Altay et al. 2023). Additionally, extant research has not clearly focused on the identification of mis- and disinformation across information contexts and national settings and has conflated misinformation with other forms of problematic content, such as hyper-partisan news (Acerbi et al. 2022).
In line with the resilience hypothesis (Humprecht et al. 2020), our findings show higher risk perceptions in less-resilient democracies. Especially in countries in the Americas with stronger threats to press freedom and more polarized settings, such as the United States and Brazil, people perceive misinformation to be very prominent across sources, issues, actors, and information formats. In countries that should offer a more resilient setting, for example, due to high levels of press freedom, lower polarization, and higher media trust, perceived misinformation prevalence is considerably lower (i.e., in the Netherlands and Spain).
This contextual difference is also reflected in the distinction between misinformation and disinformation made across different settings. In Brazil, one of the most vulnerable settings included, intentionally deceptive information is seen as more prevalent than misinformation in general. The opposite is true in the Netherlands, which has been classified as a resilient democracy (Humprecht et al. 2020). Extending the resilience framework, then, the comparison between perceived misinformation and disinformation exposure may reveal important vulnerabilities in a time characterized by a crisis of trust and factual relativism (Van Aelst et al. 2017). Whereas perceptions about inaccurate reporting and honest mistakes (misinformation) may be part of skepticism conducive to a well-functioning democracy, more pronounced beliefs that most information is intentionally deceitful may point to beliefs that undermine democratic functioning (Hameleers et al. 2020).
Understanding global mis- and disinformation across discursive, perceptual, and informational dimensions requires considering the sociocultural and political contexts of different countries. For instance, as the Argentinian case shows, social, political, and economic factors may lead certain topics to be perceived as higher- or lower-risk than generic theoretical arguments might suggest. Similarly, inductive qualitative analysis of open-ended questions revealed that respondents, particularly in Chile, Mexico, and Brazil, also associate risk perceptions about misinformation with other digital communication risks, such as online fraud. These findings are consistent with research showing that Internet users in Latin America and the Caribbean, and in North America, are especially worried about technology-related risks, including misinformation, fraud, and harassment (Knuutila et al. 2020), and may help explain the relatively high risk perceptions in context-sensitive ways. Hence, we should not generalize our mostly Western perspective on the core features of disinformation to different settings, but rather explicate the facilitating factors or impediments at play for mis- and disinformation to thrive across settings.
This study has various limitations. First, we currently lack empirical evidence on the actual prevalence of misinformation and disinformation across the contexts studied in this article. Although different content analyses in Western Europe and the United States show that misinformation’s online presence may be lower than 1 percent (e.g., Acerbi et al. 2022), data for all included countries are unavailable. Moreover, existing research may be plagued by selection biases and the general lack of a universal ground truth as a reference category for the identification of misinformation. Future research should rely on study designs in which people’s self-reported media exposure can be linked to content-analytic data on their actual media diets, which can be fact-checked to map the dose of actual disinformation exposure.
Second, although we responded to urgent calls to study (perceived) misinformation beyond the Global North, we have neglected important areas in our study, most notably Asian countries that are home to substantial proportions of the world population. Other vulnerable contexts may include Iran, China, and India. Yet, including these countries would pose different challenges, including difficulties in securing public opinion data via independent routes. Although case selection was driven by a literature review, it may be biased by the inclusion criteria of published research on misinformation, reflecting a Western bias. This emphasizes the importance of problematizing the higher- versus lower-risk typology across different regional settings and regime types, as research has suggested that people in autocratic regimes have lower risk perceptions about misinformation (Knuutila et al. 2022). A third limitation is the omission of foreign actors as high-risk sources of misinformation. Although research has regarded foreign influence as central to disinformation campaigns affecting (Western) democracies (see Bennett and Livingston 2018), we decided not to focus on these actors because “foreign” means different things across the included countries. We suggest that future research include risk perceptions related to disinformation campaigns orchestrated by prominent foreign actors, such as Russia or China.
Fourth, we developed generic categories of higher- and lower-risk domains, without distinguishing between the motivations or concrete behaviors associated with certain sources or platforms. Although we generally consider social media to correspond to a higher risk due to its affordances and lack of gatekeeping, future research needs to distinguish between different usages of social media. More interpretative approaches can, for example, assess more precisely where on social media people perceive misinformation to be prevalent.
Finally, estimating the prevalence of misinformation is challenging. News users also conflate mis- and disinformation with various other information disorders, such as biased information or hyper-partisan media (e.g., Kyriakidou et al. 2023; Nielsen and Graves 2017). Inductive analyses showed that people not only regard inaccurate or false information as misinformation but also conflate it with other technology-related risks, such as fraud (Knuutila et al. 2022). Although our findings allow us to compare risk perceptions across different information domains, we suggest that future research (1) clarify what news users mean by mis- and disinformation and distinguish it from other issues; and (2) use less complex and more concrete tasks to explore how much misinformation people encounter daily (e.g., the Experience Sampling Method or diary studies).
Supplemental Material
Supplemental material, sj-docx-1-hij-10.1177_19401612241304050, for “Risk Perceptions of Misinformation Exposure Across Platforms, Issues, Modalities, and Countries: A Comparative Study Across the Global North and South” by Michael Hameleers and Marie Garnier Ortiz in The International Journal of Press/Politics.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research is supported by an NWO XS grant from the Dutch Science Foundation (NWO).
