Abstract
The rapid development of artificial intelligence (AI) has ushered in a new era in the production, dissemination, and retrieval of information across digital media environments (e.g., Broussard et al., 2019; Dan et al., 2021; Guzman & Lewis, 2019; Li, 2023; Sundar, 2020; see also Gil de Zúñiga et al., 2024). Generative AI technologies such as large language models and text-to-image generators have dramatically expanded the capacity to produce realistic synthetic content at scale.
In this essay, we examine the growing convergence of AI and disinformation, propose a working definition of AI disinformation, and introduce the contributions to this special issue.
The Democratic and Epistemic Challenges of AI Disinformation
Throughout the world, people are raising the alarm about AI-driven disinformation. At the beginning of recent electoral campaigns in the US, Latin America, and India, for example, the World Economic Forum (WEF, 2024) identified AI-powered misinformation and disinformation as among the most severe global risks of the coming years.
These affordances lower the barriers for malicious actors and allow targeted deception at an unprecedented scale. AI also supports automation in the form of bots, fake accounts, and synthetic personas that can amplify false messages across social media ecosystems. A particularly stark example comes from Romania’s 2024 presidential election, where disinformation campaigns seriously disrupted the democratic process. Thousands of fake social media accounts spread coordinated false narratives via TikTok, Telegram, and Discord, while cyberattacks undermined confidence in election infrastructure. While direct evidence of AI deployment remains inconclusive, the rapid pace and large scale of content production and account automation suggest that AI-powered tools may have been involved. This case highlights the serious risks that arise when social and technological vulnerabilities converge. It also illustrates how AI can become a key enabler of large-scale electoral interference and a tool for amplifying information pollution and undermining the democratic process.
Despite the widespread resonance of this alarm, empirical evidence on the allegedly unprecedented undermining influence of AI disinformation, such as deepfakes, has not mirrored the abovementioned conclusions of the WEF (e.g., Dobber et al., 2020; Hameleers et al., 2024; Vaccari & Chadwick, 2020). Much like the exaggerated claims about the (misunderstood) capabilities of AI more broadly (Narayanan & Kapoor, 2024), some have emphasized that AI is an ordinary technology whose role in disinformation is frequently overstated or sensationalized without conclusive evidence (Simon et al., 2023; see Jungherr & Schroeder, 2021) and that the effects of AI on elections worldwide remain limited (Simon & Altay, 2025).
Is extant research overlooking the ways in which AI has impacted the disinformation landscape, or are societal concerns about AI-powered disinformation overblown? We argue that it is currently too early to make a definitive judgment about whether concerns related to AI and disinformation are warranted or exaggerated. However, developments in the field of generative AI are advancing rapidly, while societies may simultaneously be becoming more vulnerable to disinformation—due, for instance, to rising epistemic uncertainty and distrust in traditional institutions such as legacy media (e.g., Newman et al., 2025; see also Bartsch et al., 2025).
At the same time, voicing unsubstantiated fears about the impact of AI on disinformation may itself have negative effects. Indeed, labeling content as disinformation (Egelhofer et al., 2022), or as a deepfake in particular (Hameleers & Marquart, 2023), has been found to have a spill-over effect on people's trust in news in general and on their credibility ratings of authentic and accurate information.
Overall, we contend that scholarly engagement with AI-driven disinformation is still in its early stages. This special issue represents an initial step toward a more systematic and nuanced exploration of the phenomenon. Our objective is to move beyond a frequently binary discourse—on the one hand, dystopian warnings of an AI-fueled “information apocalypse,” and, on the other, deep skepticism as to whether AI-generated disinformation constitutes a problem of any real significance. Rather than asking whether AI is inherently exacerbating existing challenges or whether prevailing concerns are overstated, we argue for the need to develop a more grounded and evidence-based understanding of its impact on the disinformation landscape. This requires deeper empirical and theoretical inquiry to assess the role of AI within the broader context of an increasingly complex and evolving global disinformation ecosystem.
Toward a Definition of AI Disinformation
Before exploring the specific challenges posed by AI-driven disinformation, its potential for effective intervention (e.g., Feuerriegel et al., 2023), and how it compares to other forms of disinformation in terms of causes and consequences, it is helpful to establish clear boundaries and offer a working definition to guide the discussion. In Table 1, we compare traditional forms of disinformation with AI-driven disinformation, highlighting the differences between the two and illustrating how innovations and new affordances emerge through AI-driven disinformation. Consistent with existing literature, we define disinformation as the deliberate creation or dissemination of false information intended to deceive recipients (see Chadwick & Stanyer, 2022). Disinformation lacks facticity—meaning it is not grounded in relevant expert knowledge (Vraga & Bode, 2020)—and aims to influence recipients’ ideological beliefs, political behavior, consumer decisions, or interpersonal relations. Beyond individual-level effects, disinformation can also function strategically to undermine democratic institutions, electoral processes, or marginalized groups (Bennett & Livingston, 2018).
Table 1. Comparison of Traditional Disinformation and AI-Driven Disinformation.
AI adds a specific technological dimension to this phenomenon. Most commonly, AI’s role has been framed in terms of production dynamics—for example, enabling the creation of increasingly realistic synthetic media such as deepfakes, AI-generated images (Weikmann & Lecheler, 2023), or cloned audio (Barrington et al., 2025). In line with research on visual disinformation (Hameleers et al., 2020; see also Dan et al., 2021; Weikmann & Lecheler, 2023), as well as AI-powered voice clones (Barrington et al., 2025), AI-generated content may circumvent users’ suspicions by exploiting the realism heuristic, creating the illusion of unfiltered truth (Sundar et al., 2021). Furthermore, it was recently reported that AI systems can lie and spread disinformation “intentionally” or of their “own accord,” for example, to satisfy human users with generated answers to questions or to avoid being replaced by another AI model (Mitchell, 2025). Possible reasons for this behavior could lie in the models’ pretraining and the specific posttraining they receive through human feedback (see Mitchell, 2025). This form of AI-generated disinformation—sometimes called “AI hallucination”—as well as its implications for journalism and mass communication, should also be thoroughly investigated in the future.
However, AI’s influence extends beyond production. It also plays a substantial role in the dissemination of disinformation (through algorithmic amplification, automated fake account creation, synthetic personas, or targeting of specific audiences), and shapes the perception of disinformation threats (Dan et al., 2021). Public anxiety around emerging AI technologies may, in turn, heighten perceived risks associated with information manipulation (Yan et al., 2025).
Working Definition of AI Disinformation
Against this background, we propose the following working definition: AI disinformation refers to the deliberate creation, dissemination, or amplification of false information by means of AI technologies, with the intent to deceive recipients. This definition encompasses AI’s role not only in the production of synthetic content, such as deepfakes, but also in its dissemination and in shaping perceptions of the disinformation threat.
AI and Disinformation: A Double-Edged Sword
Does AI act as a catalyst for the creation and spread of disinformation? The jury is still out. To date, different experimental studies suggest that AI-powered disinformation, mostly studied in the context of deepfake videos, does not yield stronger effects than other forms of disinformation, such as textual disinformation (e.g., Barari et al., 2025). Moreover, cheapfakes, which rely not on AI but on the decontextualization of existing footage, can be slightly more credible than AI-generated deepfakes on the same issue. In line with this conclusion, other studies found that deepfakes do not directly deceive recipients (e.g., Dobber et al., 2020; Vaccari & Chadwick, 2020). However, consistent with the mechanisms of defensive motivated reasoning observed in other disinformation contexts, AI-powered disinformation may influence political beliefs among segments of the population already predisposed to support the source or content of the message (e.g., Dobber et al., 2020), potentially contributing to increased political polarization over time (see Kubin & von Sikorski, 2021). Furthermore, results suggest that deepfake videos do not yield a clear credibility advantage over other forms of visual disinformation (e.g., Weikmann et al., 2025, in this special issue), which supports the notion that AI-driven disinformation is not definitively more dangerous and influential than other forms of disinformation.
Yet, there is also evidence that we should not completely disregard the affordances of AI. AI technology is rapidly evolving and becoming more accessible to ordinary social media users, as the widespread dissemination of new generative applications and plug-ins demonstrates.
Another major issue of AI disinformation concerns the difference between credibility and plausibility, and the influence that even low-credibility deepfakes may have on political beliefs. Hameleers et al. (2024), for example, demonstrate that deepfakes that significantly deviate from political reality are generally not perceived as credible. Nevertheless, such deepfakes exert the strongest impact in lowering evaluations of the depicted political actor. In this sense, even implausible deepfakes whose content manipulation is detectable to recipients can still have substantial effects on political attitudes and beliefs (see also Weikmann et al., 2025, in this special issue).
Furthermore, a crucial implication of AI disinformation that is often overlooked in the extant literature relates to its discursive and perceptual dimensions. Alarming messages voiced by organizations such as the WEF (2024) are difficult to ignore, and many citizens across the globe are regularly exposed to threat framing about the consequences of disinformation, and AI disinformation in particular (e.g., Simon & Altay, 2025; Yan et al., 2025). Beyond such well-intentioned warning messages and threat frames, AI disinformation may also be weaponized as a blame-shifting label to delegitimize opposing truth claims or political enemies (e.g., Hameleers & Marquart, 2023).
This perspective on the wider consequences of AI disinformation resonates with the literature on disinformation and “fake news” as labels that are used to fuel distrust, polarize public opinion, and score political points, especially on the radical fringes of the political spectrum (Egelhofer et al., 2022). In a climate of pervasive distrust toward mainstream media, science, and other established sources of knowledge (Newman et al., 2025), AI-driven disinformation, whether as an accusation or a threat frame, can further amplify uncertainty about what or whom to believe. It may also be used to delegitimize accurate yet incongruent information by falsely claiming it was generated by AI (Hameleers & Marquart, 2023).
Indeed, as shown in other contexts, labels of disinformation or misinformation may prime the idea of imminent deception, motivating people to deviate from the default mode of accepting the accuracy of (visual) content or image-based proof (Egelhofer et al., 2022; van der Meer et al., 2023). Talking about AI disinformation, whether with the good intention to warn or the malicious intention to deceive, can thus have severe consequences for the ways in which media users navigate an increasingly complex information landscape, and can sow distrust in accurate information (see also Lindgren et al., 2024; Tsfati et al., 2020).
Fighting Disinformation: Deploying AI Against AI-Driven Falsehoods
The double-edged nature of the convergence between AI and disinformation becomes evident in the fact that AI is not only used to create disinformation campaigns, but also holds promise for detecting and combating them (e.g., Feuerriegel et al., 2023). In this context, various measures have been discussed that enable journalists and other communicators—such as political social media influencers (von Sikorski, Merz, Heiss, Karsay, et al., 2025), who may both disseminate (see Harff et al., 2022) and correct disinformation (von Sikorski, Merz, Heiss, Bassler, et al., 2025)—to respond effectively to identified falsehoods. Although detecting and combating increasingly sophisticated forms of AI-generated disinformation remains generally challenging (e.g., Feuerriegel et al., 2023), the literature on countering disinformation identifies a range of intervention strategies. These include fact-checking (Hameleers & van der Meer, 2020; Walter et al., 2020), correction of false claims (Walter & Murphy, 2018; see also Christner et al., 2024; Heiss et al., 2024; as well as Sun et al., 2025, and Tang et al., 2025, in this special issue), preemptive techniques such as inoculation or prebunking (i.e., psychological forewarning; e.g., Dan et al., 2021; van der Linden et al., 2017; see also Zhang et al., 2025, in this special issue), and the promotion of media literacy to enhance individuals’ ability to recognize disinformation (see Vogler et al., 2025, in this special issue). Many of these approaches can be supported or enhanced through the targeted application of AI technologies.
Emerging research has examined how audiences respond to AI involvement in fact-checking (Banas et al., 2022; Chae & Tewksbury, 2024; Chung et al., 2023; Opdahl et al., 2023). For example, Chung and colleagues (2023) showed that labeling fact-checking sources as AI reduced motivated reasoning when participants evaluated message credibility, whereas partisan bias persisted when the source was labeled as human experts or a human-AI hybrid. Chae and Tewksbury (2024) showed that fact-checking remains effective in improving truth discernment, even when audiences are aware that AI was involved in the process. Notably, awareness of AI’s role sometimes reduced perceived political bias, particularly among Republican participants.
Overall, research on the role of AI in combating disinformation is still in its early stages, yet various technological innovations are already emerging as potential mitigators. A recent study conducted in the United States, for example, demonstrates that both human and AI-generated influencers (i.e., “virtual influencers”) were effective in correcting health-related misinformation (von Sikorski, Merz, Heiss, Bassler, et al., 2025). In contrast, only the AI-generated influencer successfully corrected misleading information related to climate change, an issue that remains highly polarized in the United States; the correction delivered by the human influencer had no measurable effect. Although further research is clearly needed, AI-supported correction mechanisms may represent a promising approach that proves both effective and scalable, making them a key tool in the future fight against disinformation and its potential widespread dissemination in AI-fueled information ecologies.
Special Issue Summary
The contributions in this special issue explore (a) how different formats and modalities of AI-generated disinformation affect public perceptions, emotions, and behaviors; and (b) which countermeasures—such as inoculation strategies, corrections, and media literacy—prove most effective in resisting AI-enhanced deception. Together, the articles included in this volume illuminate the shifting terrain of AI and disinformation and offer both empirical insights and theoretical frameworks to better understand—and respond to—the profound challenges of our evolving AI-infused media environments. In the following, we briefly outline the articles of the special issue.
Section I: AI and the Cognitive, Emotional, and Behavioral Impact of Disinformation
Kang and Valadez conduct a meta-analysis and synthesize findings from 24 experimental studies to assess how deepfakes influence credibility, emotions, and sharing intentions. The analysis shows that deepfakes tend to heighten emotional responses. Media literacy was found to reduce the perceived credibility of deepfakes and decrease users’ willingness to share them. Interestingly, participants with lower levels of self-perceived media literacy experienced stronger emotional reactions to deepfakes. Overall, the study highlights that media literacy—depending on its type and the video topic—can help mitigate the negative effects of deepfakes.
In the next contribution, Rasul, Calabrese, Oh, Cho, Jeon, and Boukes explore how people’s perceptions of misinformation (PMI) and disinformation (PDI) influence their willingness to consume news from traditional media, social media, and AI-generated sources. A pre-registered experiment using a quota-based sample in the United States tested how these perceptions interact with political ideology and media trust. The findings show that higher levels of PMI and PDI reduce intentions to consume all three types of news. This effect was consistent regardless of the news format.
Weikmann, Egelhofer, and Lecheler investigate the effects of different types of visual disinformation on public perception. While deepfakes often dominate the conversation, simpler forms like cheapfakes and decontextualized videos are more common and less understood. In an online experiment, participants viewed one of three manipulated videos portraying the same false message about a politician. Although all videos were rated low in credibility, both deepfakes and cheapfakes still led to misperceptions, and the deepfake in particular negatively affected views of the politician. The findings highlight that even low-credibility visual disinformation can shape attitudes and understanding.
Section II: Countermeasures: AI-Driven Corrections and Interventions to Protect Against AI-Generated Disinformation
Tang, Fang, Sun, Bode, and Vraga explore how source expertise, AI involvement, and the timing of corrections (prebunking vs. debunking) affect their ability to reduce misperceptions about raw milk and related consumption intentions. In a two-wave online experiment, debunking proved consistently more effective than prebunking, with effects lasting at least 1 week. Expert sources slightly outperformed nonexperts in reducing misperceptions, but only in the initial wave. Notably, the use of AI in generating corrections did not significantly impact their effectiveness. These findings suggest that debunking is a reliable strategy, while AI offers potential for scalable corrections without undermining credibility.
The next article by Sun, Shen, Choi, Borah, Wagner, Shah, and Yang examines whether AI-generated visuals and credibility cues can improve the effectiveness of correcting health misinformation. In an experimental study, the researchers tested visual exemplars, infographics, source tagging, and partisan-neutral posting histories. The results show that visual exemplars slightly outperformed text-only corrections by reducing psychological resistance and lowering misbeliefs. Infographics and credibility cues, however, did not significantly boost the effectiveness of corrections. Overall, the findings suggest that while AI-generated visuals can enhance persuasion, their impact is modest and depends on the type of visual used.
Next, Zhang, Kim, and Scott address the question of how to effectively counter the democratic threat posed by political deepfakes. They explore whether inoculation strategies—designed to build resistance against misinformation—can reduce the credibility and influence of deepfake content. In a controlled experiment, participants were exposed to different inoculation modes and politically aligned or misaligned deepfakes. The findings show that inoculation generally enhances awareness, increases fact-checking intentions, and lowers trust in deepfake messages. However, when participants viewed deepfakes that opposed their political views, they were more likely to believe the embedded disinformation mischaracterizing the differing political position, highlighting the powerful role of partisan bias in the reception of deepfakes.
Vogler, Rauchfleisch, and de Seta investigate how accurately people can detect high-quality deepfakes and whether a short media literacy intervention improves their detection abilities. In an online experiment in Switzerland, many participants struggled to distinguish deepfakes from real videos based on brief visual cues. The literacy intervention alone had no direct effect on detection performance. However, participants with prior experience with deepfakes or higher media literacy benefited more from the intervention. The results suggest that isolated interventions are insufficient, underscoring the need for broader, long-term digital literacy strategies.
Conclusion and Implications for Journalism and Mass Communication
In this essay, we examine the growing convergence of disinformation and AI and propose a working definition of AI disinformation that is sensitive to different contexts and applications of disinformation. Against the backdrop of the often inconclusive and at times contradictory research evidence presented in this special issue and the broader literature, we adopt a nuanced perspective, contending that it is too early to determine how AI will reshape the disinformation landscape in the years to come.
Importantly, our aim is to move beyond a binary discourse—between, on one hand, dystopian warnings of an AI-driven “information apocalypse,” and, on the other, skepticism regarding whether AI-generated disinformation poses any meaningful threat. We argue that deeper empirical and theoretical scholarly investigation is needed to understand the role of AI within the broader context of an increasingly complex and evolving disinformation ecosystem. This special issue represents a first and much-needed step in that direction.
In our view, the influence of AI disinformation is real but must be considered in a nuanced way. It is shaped by individual-level factors such as specific predispositions, ideologies (e.g., Rasul et al., 2025, in this special issue; see Wang et al., 2024), and competencies, including media literacy (Vogler et al., 2025, in this special issue), as well as by structural factors such as political and media systems with low resilience to the influence of disinformation (Humprecht et al., 2020). A multi-level approach to the effects of AI disinformation is therefore critical: one in which resilience is understood not only at the system level (i.e., the success of populist communication and media trust), the technological level (i.e., the potential of new AI tools), or the individual level (i.e., media literacy), but as an integration of the different vulnerable contexts that could offer an opportunity structure for AI disinformation to thrive.
At the same time, empirical research indicates that people are not always easily deceived by deepfakes (e.g., Vaccari & Chadwick, 2020), that the effects during election campaigns appear limited (Simon & Altay, 2025), and that individuals often rely on prior knowledge, plausibility assessments, and contextual cues when evaluating content. In line with these findings, the effects of AI disinformation should not simply be contrasted with all other forms of disinformation, but should be contextualized against the backdrop of the more hybrid and gray zones of (visual) disinformation and deception that currently exist, such as cheapfakes and visual decontextualization.
However, the rapid development and democratization of AI technologies—particularly through their seamless integration into social media platforms and content creation tools—warrant continued vigilance regarding their potential to undermine information integrity. Indeed, initial findings suggest that AI systems are capable of lying and spreading disinformation “intentionally” for various reasons (Mitchell, 2025). This presents additional challenges not only for individual users but also for various communicators, such as journalists who rely on AI-generated content to support their work processes. Chatbots may anticipate human information preferences and generate content that appears credible but is in fact false (Mitchell, 2025). This increases the risk that users may unintentionally spread misinformation that originates from “intentionally” generated AI disinformation.
This dual reality—heightened technological capability coupled with limited but context-dependent persuasive effects—poses a particular challenge for journalism and mass communication. While direct deception may be less widespread than public narratives and perceptions suggest (e.g., Yan et al., 2025), the broader discursive and perceptual consequences of AI disinformation should not be underestimated. Public narratives about its supposed unprecedented impact, combined with the strategic use of “AI disinformation” as a delegitimizing label (Hameleers & Marquart, 2023), gain traction in highly polarized, distrustful, and epistemically uncertain environments (also see Labarre, 2025). In such climates, AI becomes not only a generator of fabricated content but also a symbol of epistemic instability, with the potential to erode trust in both genuine journalism and factual information.
For the journalism profession, these dynamics demand a careful response. Newsrooms and fact-checkers must be equipped with the skills, tools, and verification protocols needed to identify and respond to AI-generated content, without overstating its threat in ways that inadvertently deepen public cynicism toward all information (e.g., Hameleers & Marquart, 2023; see also Adami, 2024; Newman et al., 2025; Tsfati et al., 2020). Effective reporting will require a balance between transparency about technological risks and an avoidance of disproportionate alarmism and discourses of moral panic (Jungherr & Schroeder, 2021).
The convergence of AI and disinformation is ultimately best understood as a double-edged sword: while AI is increasingly used to generate and amplify disinformation, it also holds potential for its detection and mitigation (Feuerriegel et al., 2023). As several contributions in this special issue show, the field is beginning to gather more detailed data about both the risks and opportunities that this convergence entails. Emerging intervention strategies, such as fact-checking (e.g., Hameleers & van der Meer, 2020), correction of false claims (Walter & Murphy, 2018; see also Christner et al., 2024), and inoculation approaches (van der Linden et al., 2017), can be enhanced through the targeted use of AI technologies. Communicators, including journalists and political influencers, are increasingly central to the correction of disinformation, underscoring the importance of understanding how AI can support their roles. Initial research also suggests that audiences may respond differently depending on whether fact-checking is attributed to humans or AI, with AI-labeled sources sometimes reducing perceived bias and motivated reasoning (Chae & Tewksbury, 2024; Chung et al., 2023; von Sikorski, Merz, Heiss, Bassler, et al., 2025). Although still in its early stages, research into AI-assisted countermeasures reveals a growing set of scalable, context-sensitive tools that may play an increasingly vital role in addressing the evolving disinformation landscape.
From a scholarly perspective, these developments call for a holistic research agenda that investigates not only the direct effects of AI-generated disinformation across contexts and audiences, but also the indirect and discursive effects that influence public perception, political polarization, and institutional trust. Future research should track the evolving interplay between AI affordances, audience skepticism, and the socio-political contexts in which disinformation is produced and consumed. Given the pace of technological change, effect studies must be updated regularly, with meta-analyses (e.g., see Kang & Valadez, 2025 in this special issue) and systematic reviews synthesizing emerging evidence.
Finally, an important implication for both journalism practice and media literacy initiatives is the need to strengthen individual and societal resilience to both actual AI disinformation and the false labeling of factual content as AI-generated. Building such resilience will require interdisciplinary cooperation between journalists, technologists, social scientists, and educators to develop interventions that promote critical engagement without fostering corrosive distrust. In this sense, the challenge of AI disinformation is not solely a technological one; it is equally a question of remaining true to the epistemic foundations of journalism and mass communication.
