Introduction: Sense of Dataveillance and Digital Communication
In the summer of 2020, the president of the United States signed an executive order to suspend temporary work visa renewals (Ordoñez, 2020). A graduate student commented on this news on Twitter, writing that they had wanted to say something about this, but did not, fearing a negative impact on their upcoming visa renewal application. They further wrote, “we should not have to think about these things.” Apparently, a fear of producing digital traces that could have negative repercussions prevented this individual from freely voicing their opinion about a political issue. Judging the potential consequences of one's actions has certainly always influenced people's choices—which is often a desirable outcome—but this example brings out several specificities: The behavior in question, commenting on news online, is mundane and minor; the associated potential consequence, being denied a visa renewal, is far-reaching. The practice of using applicants’ digital traces in visa-granting decisions is not transparent. Further, voicing an opinion on government actions is entirely permissible, regardless of immigration status. As a form of political participation, this behavior is even socially desirable. And, crucially, the reason a possible impact was assumed in the first place is a diffuse sense of being subject to dataveillance rather than any concrete, communicated rule.
This article aims to understand how people's sense of being subject to dataveillance may cause them to restrict their digital communication behavior. This undercuts the vital role accorded to digital communication in contemporary society—potential negative outcomes of digital communication subjected to surveillance may stifle digital media use for everyday activities, personal development, societal participation, or political advocacy. Although these chilling effects have long been recognized conceptually, their underlying mechanisms and empirical magnitude remain underexplored.
The main contributions of this article are threefold. First, we develop a theoretical model (integrating insights from media and communication science, law, surveillance studies, and social psychology) focusing on explicit causal mechanisms. Second, we analyze existing empirical studies and original survey data to gauge the magnitude of the chilling effect phenomenon in everyday life. Third, we consolidate the theoretical model's propositions and the empirical findings’ limitations to demonstrate the requirements for future research. With these three contributions, we hope to motivate a research agenda that will spawn innovative empirical study designs and iterative model revisions aimed at understanding and measuring chilling effects of digital dataveillance.
The System-Level Context of Individual-Level Chilling Effects
Broad dangers of surveillance include discrimination and persuasion (Richards, 2013). Here, we focus on an additional aspect, chilling effects, that has received little attention in the context of dataveillance (Penney, 2017). The potentially harmful consequences of chilling effects for pillars of deliberative democracies, such as freedom of expression, speech, and thought, were recognized nearly half a century ago: individuals who believe they are under surveillance may preemptively self-inhibit free speech and behavior (White and Zimbardo, 1975). Since then, fast-growing and ubiquitous dataveillance has severely intensified the problem.
Although the concept of chilling effects goes back even further, articles that rely on a similar definition to the one used here emerged in the 1970s (Schauer, 1978; White and Zimbardo, 1975) with reference to a 1965 US legal case regarding the free exercise of basic rights. Schauer (1978) argued that errors made in favor of free speech are preferable to the wrongful suppression of free speech. The cause of the chilling effect in this literature was the fear of legal prosecution and the uncertainties of this process. After the September 11 attacks, the concept resurged and was applied to counter-terrorism surveillance: Solove (2006, 2007) explicitly introduced surveillance as an inhibitor of people's legitimate activities and acknowledged the indirect nature of the risk. The practice that has a chilling effect, surveillance, is not directed at basic rights in democratic systems, but affects them nonetheless; and Solove (2006) noted that “awareness of the possibility of surveillance can be just as inhibitory as actual surveillance” (p. 495).
In many societies, digital media have become the most convenient means to perform everyday activities such as seeking information or exchanging ideas. Beyond active content creation or “liking” and “following,” merely being online produces persistent digital traces and data. The growing collection and analysis of this (big) data (van Dijck, 2014) is automated, continuous, inexpensive, and opaque. It serves corporations to optimize services and profits, and states to ostensibly increase national security. This far-ranging dataveillance system, the systematic monitoring of data reflecting the actions and communication of individuals (Clarke, 1988), may produce a diffuse sense of being constantly watched, potentially deterring people from permitted or even socially desirable behavior. Dataveillance can thereby lead to self-censorship, conformity, and anticipatory obedience. Such chilling effects inhibit the exercising of fundamental rights and consequently constitute a subtle, cumulative risk for individual autonomy, well-being, and democratic participation in digital societies (see Véliz, 2020). Chilling effects are not a uniform or inevitable consequence of dataveillance; some people may alternatively or additionally increase the protection of their data (Chou and Chou, 2021) or engage in sousveillance (Mann and Ferenbok, 2013). Chilling effects are one of many processes in a complex system of constantly changing digital communication and are in a sense volatile as well as dependent on further influences, such as dispositions, situations, or type of communication behavior. Our notion of inhibited digital communication behavior certainly includes but is not limited to acts of political participation online—the cumulative effects of undue deterrence from small acts of digital communication to satisfy individuals’ personal or social needs in everyday life are just as relevant. Figure 1 provides a rough sketch of individuals’ chilling effects in a system-level context. 
Although there are further links between these elements, this simplified cycle serves to structure the following discussion of relevant concepts.

Individuals’ chilling effects in a system-level context.
In the early 21st century, the nonstop generation of vast amounts of digital traces related to people's everyday digital communication surged with the establishment of Google, Facebook, and other platform companies. In 2013, documents leaked by Edward Snowden revealed the extent of global dataveillance, yet interest in privacy and self-protection in everyday digital media use was short-lived (Preibusch, 2015). The more subtle, longer term consequences have only recently been recognized with the concept of chilling effects in light of dataveillance, and empirically with studies by Penney (2016) on Wikipedia, Marthews and Tucker (2017) on Google Search, and Stoycheff (2016) on Facebook. The focus has been on state surveillance and intelligence agencies, yet people are increasingly confronted with negative outcomes due to corporate (Noble, 2018) or state–corporate partnership (Schneier, 2013) dataveillance.
Our model is developed from the perspective of a liberal democracy where digital communication is ubiquitous in everyday life. Yet the chilling effects mechanism is likely to be more evident and less subtle in countries with weaker protection of free speech and more authoritarian governments, where, for example, the use of virtual private networks is banned or providers of messaging apps have to identify their users (Bakir, 2021).
Societal Impacts and Governance
The isolated chilling effect results in the suppression of a behavioral intention. The societal impact arises from the aggregation of many such individual-level suppressions over time.
In societies where the (dataveilled) use of digital media is de facto non-optional for everyday functioning, such chilling effects on digital communication, including but not limited to exercising fundamental rights, may be a long-term and cumulative risk for individual autonomy and collective action, warranting further academic attention. A holistic perspective on chilling effects, therefore, includes a macro perspective on the governance arrangements that steer dataveillance practices as a response to the societal risks of inhibited digital communication. Individuals in a specific population will vary greatly in their general preferences regarding digital communication and in their responses to an increased sense of dataveillance. Nonetheless, a population or social system can be characterized by its average level, or baseline, of digital communication without self-inhibition, that is, communication regarded as normatively desirable in the social system (“non-chilled”) (Figure 2). An exogenous event, or shock, that makes dataveillance salient is expected to cause a relatively sudden drop in this digital communication. The magnitude of this drop will depend on the specifics of the shock, such as the breadth and severity of the dataveillance practice revealed. Typically, drastic self-inhibition in the immediate aftermath of a shock characterized by extreme uncertainty regarding personal affectedness will quickly subside (see Preibusch, 2015) and a “back to normal” recovery phase ensues. Any lasting effect, for instance, a minor overall decrease in the level of expressing personal views through digital communication, indicates an incomplete recovery and, over time, a trend to a new baseline (equal to the initial baseline minus the sum of unrecovered digital communication across all shocks). The denser the succession of shocks, the more overall chilling occurs as subsequent self-inhibition interrupts the previous recovery phase.
This temporal accumulation of short-term chilling effects to long-term risk is loosely inspired by excitation transfer theory—a model of emotional reactivity where residual excitation from preceding experiences carries over (see Zillmann, 2008; Zillmann et al., 1972)—and Hebbian theory—a biological learning mechanism where a repeated activity induces lasting change at the level of neurons (Galluppi et al., 2015; see Suri, 2004).

How imperfect recovery from shocks may lead to long-term chilling of digital communication.
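The shock-and-recovery trajectory described above (Figure 2) can be made concrete with a toy simulation. All parameter values (shock timing, drop size, recovery rate, recovered fraction) are illustrative assumptions, not empirical estimates:

```python
import math

def simulate_chilling(baseline=100.0, shocks=(50, 150, 250), drop=20.0,
                      recovery_rate=0.05, recovered_fraction=0.9, steps=400):
    """Toy trajectory of population-level digital communication.

    At each shock, communication drops by `drop`; it then recovers
    exponentially, but only `recovered_fraction` of the drop is ever
    regained, so each shock permanently lowers the long-run baseline.
    """
    trajectory = []
    for t in range(steps):
        deficit = 0.0
        for s in shocks:
            if t >= s:
                elapsed = t - s
                # Transient part of the drop decays toward zero...
                transient = drop * recovered_fraction * math.exp(-recovery_rate * elapsed)
                # ...while a residual part never recovers.
                permanent = drop * (1 - recovered_fraction)
                deficit += transient + permanent
        trajectory.append(baseline - deficit)
    return trajectory
```

With three shocks and 90% recovery per shock, the long-run level settles at the initial baseline minus the accumulated unrecovered deficits, mirroring the lower “new baseline” sketched in Figure 2; denser shocks would interrupt recovery and deepen the cumulative chill.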
Current discussions surrounding technological solutions to contain the spread of COVID-19 illustrate the broader societal issues of dataveillance: shortly after drastic lockdown measures were implemented across the globe in early 2020, a plethora of countries launched tracing apps and other health-based surveillance technologies (e.g. thermal scanners) as promising measures against an uncontrollable spread of the virus (Chiusi et al., 2020). Objecting institutions and actors raised concerns—accusing governments of technological solutionism (see Morozov, 2013) and fearing a long-term normalization of mass surveillance (Chiusi et al., 2020) with potentially unforeseen effects—especially since the effectiveness of these measures was not proven. At the core of this conflict of interests is the pursuit of a balance between public safety (including health) and individual autonomy (see e.g. Broeders et al., 2017; van Brakel, 2016). This discussion also shows the popular opinion that even if such tracing apps do not achieve their stated purpose (i.e. containing the spread of the virus), they at least “do no harm.” Potential subtle, long-term effects of such dataveillance practices on communication behavior—as described by the chilling effects literature—often remain invisible (see Vitak and Zimmer, 2020).
In the aftermath of data scandals or personal negative experiences, the focus is on the immediate reaction, which may be substantial self-inhibition, but this effect tends to dissipate over time. We propose, however, that this recovery is likely to be incomplete, particularly as reports and public awareness about dataveillance are intensifying, imperceptibly leading to a lower societal level of “normal,” unrestricted digital communication (see Figure 2). Although the focus here was on shocks like data scandals that generally induce negative outcomes for individuals, in principle, appropriate governance mechanisms could invert the plotted trajectory and cause an upward trend toward a higher baseline of uninhibited digital communication.
The societal relevance of contemporary dataveillance leading to chilling effects lies in the “lost potential” of digitization and the internet: the uninhibited flow of human knowledge is curtailed when people preemptively refrain from everyday acts of searching, sharing, and self-expression.
Dataveillance Practices and People's Sense of Dataveillance
The amount of data on individuals’ lives that is generated and can be collected and analyzed has tremendously increased as a result of digitization. In digitized societies, digital services are often the most convenient and effective way to perform everyday tasks. This includes the mundane, such as looking up directions on Google Maps, watching a Netflix series, talking to friends on Facebook Messenger, or buying products online. Higher involvement activities such as researching health information, professional collaboration, or political expression are often enabled by the same devices, services, and platforms. The result is that these diverse communication behaviors leave digital traces, that is, they have become datafied (van Dijck, 2014; Ytre-Arne and Das, 2021).
The stated corporate interests in dataveillance are, for example, maximizing user engagement and influencing behavior through the personalization of ads, search results, and recommendations (e.g. Boerman et al., 2017; Esposti, 2014); on the part of government-led dataveillance, public safety is generally cited, ostensibly achieved through counter-terrorism activities and predictive policing based on digital traces (e.g. Andrejevic, 2017; Lyon, 2014). Over the past decade, through Snowden's NSA revelations in 2013, the Cambridge Analytica case in 2018 (Cadwalladr and Graham-Harrison, 2018), and lesser scandals, everyday users of internet-based technologies have become increasingly aware that they are being constantly tracked. These data are used detached from context, combined with data from other users, and analyzed in ways that the individual has no control over. With the outbreak of COVID-19, in a very short time, this system of dataveillance has broadened in scale and scope to additional forms of data collection such as thermal scanners, GPS-based location services, and facial recognition systems, imposing “a new normal” based on pervasive and health-based surveillance (Chiusi et al., 2020).
Dataveillance here refers to surveillance, that is, “the focused, systematic and routine attention to personal details for purposes of influence, management, protection or direction” (Lyon, 2007: 14), but based on digital traces. The critical shift lies in its automation, making dataveillance attractive for businesses and governments even when there is no “suspicion” or immediate purpose, “just in case.” The lower cost per unit, remoteness, lower visibility, and continuous real-time nature facilitate these practices (see Clarke, 1988; Marx, 2002; van Dijck, 2014). Dataveillance may be very broad, including all types of sensors and data related to different entities; of primary interest here is the subtype of profiling automated by algorithms as the “recording and classification of data related to individuals” (Büchi et al., 2020: 2).
Although the field of law in particular, together with the emerging field of surveillance studies, has initiated the discussion related to basic rights like freedom of expression, there is a distinct lack of theory on the factors contributing to chilled, or inhibited, general digital communication that would propose clear hypotheses for empirical examination. The current consensus in the literature is merely that chilling effects are a “potential danger” of surveillance, subject to empirical confirmation. Drawing on the extant literature that has intersected with digital communication behavior (e.g. Penney, 2016), we propose that, at the individual level, a sense of dataveillance is the dominant cause of communicative self-inhibition.
Inhibited Digital Communication Behavior
Having situated chilling effects in a system of dataveillance practices and argued that long-term shifts in digital communication are driven by exogenous shocks, we zoom in on the individual-level process of self-inhibition, which is where the main contribution of this article lies. Ytre-Arne and Das (2021) suggest that people “often know that their engagements leave traces that form patterns and feedback loops, but also that the full extent of these are beyond transparency, rendering the prediction of outcomes of communicative exchanges less apparent” (p. 14). Thus, while day-to-day routines do not constantly foreground potential negative outcomes, data scandals and other salience shocks “remind” people of the extensive practice of dataveillance. An expected reaction to an increase in awareness of this massive system of dataveillance is self-restraint in order to increase conformity with perceived societal norms (e.g. Manokha, 2018): a chilling effect, which can also be understood as anticipatory obedience or self-censorship. This deterrence from or suppression of (legitimate) behavior existed long before digital technologies came to fundamentally pervade all domains of everyday life in modern societies (White and Zimbardo, 1975) but has attained new significance in light of dataveillance. If people know they are being watched—or believe they know, regardless of the factual extent of dataveillance—this can deter them from exercising permitted or even socially desirable behavior. This behavioral modification occurs without the state or corporations ever directly exerting power (see e.g. Büchi et al., 2020; Marthews and Tucker, 2017; White and Zimbardo, 1975), making chilling effects a latent and subtle process, and thus potentially under-acknowledged as a risk.
The internet and the digital media based on it are uniquely predisposed to serve information seeking and self-expression, yet pervasive, indiscriminate dataveillance through the algorithmic construction of personal data profiles may suppress these practices (see Büchi et al., 2020; Friedewald, 2018; Hildebrandt, 2008). Many potentially chilled behaviors have received very little attention due to their seeming mundaneness. Yet everyday behaviors such as researching a contentious or an entirely noncontroversial topic online, sending messages on a smartphone, or liking other people's posts on social networking sites make up the fabric of social life in the digital society. The hypothesis in the existing research is that surveillance acts as a normalizing gaze (Richards, 2013) upon these activities, a sense of being watched and judged, that suppresses “experimentation with the unorthodox” (Cohen, 2000: 1426).
The imprecise but perhaps intuitive formulation of the chilling effect hypothesis in the existing research is that surveillance chills behavior.
The sense of dataveillance will primarily affect the person's attitude toward digital communication (see Bräunlich et al., 2021)—basically, the individual's evaluation of whether digital communication, and thus leaving digital traces, is “a good idea”—this evaluation need not be extensively processed cognitively. More technically: “attitude toward the behavior is assumed to be a function of readily accessible beliefs regarding the behavior's likely consequences” (Ajzen, 2020: 2). This is the “point of entry” of dataveillance practices into individuals’ communication behaviors (see Figure 3): an increase in dataveillance salience, such as through news reports, is expected to heighten people's sense of dataveillance and thereby the accessibility of the belief that digital communication behavior can entail personal repercussions (negative outcomes), which would make the attitude toward the behavior less favorable, which then decreases the intention to engage in the behavior.

Mechanisms of the chilling effect of dataveillance practices on digital communication.
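As a minimal formalization of this mediation chain, the direction of the effects can be sketched in a toy linear model. The function name and all coefficients are purely illustrative assumptions, not estimates from the theory of planned behavior literature:

```python
def chilling_chain(salience, b_sense=0.8, b_belief=0.7, b_att=-0.6, b_int=0.9):
    """Toy linear mediation: dataveillance salience -> sense of dataveillance
    -> accessibility of negative-outcome beliefs -> (less favorable) attitude
    -> (lower) behavioral intention."""
    sense = b_sense * salience        # salience heightens the sense of dataveillance
    belief = b_belief * sense         # which makes negative-outcome beliefs accessible
    attitude = b_att * belief         # negative sign: beliefs worsen the attitude
    intention = b_int * attitude      # intention to communicate tracks the attitude
    return intention
```

The point of the sketch is only the sign pattern: any increase in salience propagates through the chain and depresses the intention to engage in digital communication.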
A number of mediating and moderating processes are predicted by the theory of planned behavior (TPB). Applied to dataveillance and digital communication, the most likely sources of differential effects of dataveillance salience on inhibited digital communication behavior are background factors (sociodemographics, personality, values, etc.). For instance, noncitizens may be much more wary of researching or posting content critical of the government. Further, “algorithmic profiling that facilitates the inclusion of different sources and types of data is likely to contribute to increasing entanglements of protected identities, thus creating new categories and groups of people that experience forms of intersectional discrimination” (Mann and Matzner, 2019: 5)—individuals potentially affected by such discrimination may justifiably form beliefs of negative outcomes, which mediate how the sense of dataveillance ultimately impacts digital communication. Salience shocks, for example, through media reports or negative experiences, will on average increase the level of perceived dataveillance, but there will be qualitative differences in how people update their conceptions of how dataveillance works (Ytre-Arne and Moe, 2021). The more dataveillance is attributed to human or human-like actors, the higher the individual susceptibility to chilling effects (Festic, 2020; Siles et al., 2020).
Applying TPB, the behavior of interest is here defined as engaging in digital communication that produces digital traces in everyday life—for example, for the purpose of information-seeking, self-expression, social coordination, or entertainment. Consequently, the other elements of the mechanism (see Figure 3) are defined in relation to this behavior of interest. We accordingly formulate the general, isolated chilling effect hypothesis as a microlevel causal mechanism triggered by an external factor.
Toward an Empirical Test of Chilling Effects
Insights From and Gaps in Existing Studies
Assessing the prevalence of chilling effects and the necessity for governance interventions requires a solid empirical basis. The most critical issue in the current state of research is that empirical work in this field remains extremely limited overall. Studies on the chilling effects of dataveillance have only been prompted very recently by the revelations surrounding the NSA's surveillance practices (Penney, 2016). In addition to a few qualitative studies that incidentally touch on chilling effects (e.g. Lupton, 2020), there is a very limited body of quantitative research in the field.
In an experimental study, Stoycheff (2016) surveyed 255 US participants in an online questionnaire with a priming manipulation in a Facebook post: those who were primed that their social media usage would be surveilled, and who had opinions diverging from the mainstream regarding US airstrikes on ISIS, were particularly likely to be deterred from posting their opinions. In addition to concerns about social isolation, Stoycheff (2016) suggests fear of prosecution by the government as a plausible contributor. However, a precise mechanism and an externally valid setting are still needed. Further, the role of direct and indirect negative outcomes due to corporate dataveillance has not been considered, especially empirically. In another experimental setting, Stoycheff et al. (2019) found that higher perceived online government surveillance not only chilled participants’ likelihood to engage in illegal activities online but also deterred intentions to engage in legitimate political activities online (e.g. sharing opinions, criticizing the government). This effect was found both for a demographically diverse sample and a group of adults who identified as Muslim.
In addition to these experimental studies, there have been attempts to measure chilling effects in cross-sectional observational or survey research. To the best of our knowledge, all existing observational studies rely on the NSA revelations as a natural stimulus. An innovative study on Wikipedia use analyzed how web traffic to privacy-sensitive articles (e.g. Al Qaeda, Iraq, Nationalism) changed after the NSA surveillance practices were publicized (Penney, 2016). The main finding was that traffic to these articles immediately and significantly declined after the revelations—although clearly, reading up on any of these topics is entirely legitimate citizen behavior. In a very similar manner, Marthews and Tucker (2017) compared the search volume of selected privacy-sensitive keywords on Google in 11 countries before and after the NSA surveillance revelations. After the 2013 disclosures, they found significantly lower traffic for search terms that users feared could get them in trouble with the US government (e.g. Chemical Spill, Explosion, Tuberculosis). On an international scale, Google users were also found to be less likely to use search terms that they perceived as potentially sensitive (e.g. Atheism, Herpes, White Power). Also at the aggregate level of search volume, Rosso et al. (2020) found a significant and lasting increase in the use of the DuckDuckGo search engine (whose unique value proposition is to not profile its users) directly following the NSA revelations.
A large survey in Norway also addressed this topic and indicated the existence of significant chilling effects on online behavior. In 2014, significant proportions of the Norwegian population indicated having decided to not make a purchase (28%), to not sign a petition on the internet (26%), or to talk face-to-face rather than communicate electronically (26%) because they were unsure how such digital traces would be used in the future (Teknologirådet and Datatilsynet, 2014). The results also revealed that many would be more careful about their online searches (27%) or their posts (24%) if intelligence services were to surveil their everyday internet use (Teknologirådet and Datatilsynet, 2014). The results from the 2019 wave were very similar, revealing chilling effects due to government surveillance regarding looking for information about sensitive topics and expressing opinions online; chilling effects on digital communication behavior due to concerns about how private companies use their data appeared to be even more pronounced (Datatilsynet, 2020).
Indications of Widespread Chilling Effects
This article developed a theoretical model connecting dataveillance and inhibited digital communication (see Figure 3). Before attempting a comprehensive empirical test thereof, we need to know whether there is good reason to assume that chilling effects are a part of everyday internet use for a substantial number of people in the first place. To initially assess the magnitude of the phenomenon, we included several questions in large-scale surveys. An initial result was that more than half of internet users felt dataveillance deterred them from self-expression or information seeking, ranging from rarely to always; chilled self-expression was substantially more frequent than chilled information-seeking (Latzer, Büchi, et al., 2019). Here, we report additional original results from a second, dedicated and more detailed dataveillance survey module within a research project in Switzerland that investigates the role of algorithms in everyday life in a population-level sample (Latzer, Festic, et al., 2020). The sample of 1202 respondents is representative of internet users aged 16 and over in terms of age, gender, region, household size, and employment status. Respondents were sampled by an independent social and market research company, and the online questionnaire was fielded in three languages between late 2018 and early 2019.
The items were introduced by stating that the interest was in whether respondents adapted their online behavior to potential risks. It was necessary to include the behavior in question (e.g. information seeking) and a cue to dataveillance in a single item so as not to displace reports of subtle self-inhibitions by more prominent reasons for certain communication behaviors (e.g. simply not being interested in a topic and thus not seeking information about it). The four statements relate to everyday digital communication behaviors such as seeking information and expressing opinions online.

Distributions of the chilling effects indicators. Note. Items were worded so that higher values indicate stronger chilling effects (NA reflects non-response and “don't know” answers). Respondents reported their agreement with four statements suggesting dataveillance as a cause of certain behaviors.
Overall, the reported experiences of chilling effects due to dataveillance appear relatively similar across major sociodemographic groups (see Figure 5), particularly considering that many practices and attitudes related to internet use exhibit significant variation across age and sex (Latzer, Büchi, et al., 2020). Within age groups, the only significant sex difference in the chilling effects indicators (i.e. where the confidence intervals do not overlap) concerned opinion sharing: on average, women aged 60 to 79 were more likely to self-inhibit this digital communication behavior than men of the same age. In general, younger internet users tended to report smaller chilling effects, but differences amount to no more than about half a point between the youngest and oldest age groups on the 1 to 5 scale.

Indicators of chilling effects by age and sex. Note. Vertical bars represent 95% confidence intervals; horizontal lines represent overall (solid) and group means (dashed). Y-axis indicates means on a discrete scale: 1 “do not agree at all” to 5 “strongly agree.”
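The non-overlapping confidence interval criterion used above can be sketched as a small helper function on agreement scores. This is a conservative shortcut for flagging group differences, not a substitute for a formal two-sample test, and the example data are synthetic:

```python
import math
from statistics import mean, stdev

def mean_ci(scores, z=1.96):
    """Mean and 95% CI half-width for a list of 1-5 agreement scores."""
    m = mean(scores)
    half = z * stdev(scores) / math.sqrt(len(scores))
    return m, half

def differ_significantly(a, b):
    """Declare a group difference only if the two 95% confidence
    intervals do not overlap (stricter than a two-sample t-test)."""
    (ma, ha), (mb, hb) = mean_ci(a), mean_ci(b)
    return ma + ha < mb - hb or mb + hb < ma - ha
```

For example, two synthetic groups of 100 respondents with means of 4.5 and 1.5 on the 5-point scale would be flagged as different, while comparing a group with itself would not.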
Consequences for Further Research
Although there is evidence for the existence of chilling effects in the context of dataveillance—both from existing research and the new data provided here—the empirical study designs have not been able to provide robust accounts of actual chilling effects. The general lack of empirical studies is presumably due in part to the concept's disciplinary roots in law, where empirical research is comparatively rare, as well as to its only recent adaptation to the specifics of dataveillance as opposed to “traditional” surveillance. Moreover, the chilling effects hypothesis has a few inherent characteristics that make it difficult to test empirically (Table 1).
Current challenges and recommendations for further research.
Empirical study designs will need to combine the strengths of multiple complementary methods listed in Table 1, but many difficulties remain as employing one measurement strategy may impinge on the feasibility of another. An experimental component, where participants are randomly assigned to conditions—for example, with varying intensities of (manipulated) sense of dataveillance and a control group—would allow strong conclusions on the causality of chilling effects. This set-up, however, would need to be implemented not in a lab but in situ to ensure external validity. To this end, techniques of ecological momentary assessment such as smartphone-based mobile experience sampling could be adopted (see Doherty et al., 2020; Kubey et al., 1996; Schnauber-Stockmann and Karnowski, 2020). Even in a large-scale mixed-method study along these lines, many exogenous factors relevant for different manifestations of chilling effects would remain unaccounted for. Here, simulation approaches can provide further insights, for example, through agent-based modeling (see Bruch and Atwell, 2015; Epstein, 1999; Jackson et al., 2017). In an empirical investigation in a relatively stable social system, the level of perceived legitimacy of dataveillance will not vary considerably, yet this variable can be targeted by regulations and policies. A simulation can artificially vary relevant variables with low practical variance even in long-term studies, such as regulations, to predict the effects they may have on behaviors. As we introduce more variables, empirical tests of all possible combinations of these variables are increasingly unfeasible, but simulations can easily experiment with this “behavior space” and point to likely or interesting cases that warrant an empirical test. Finding a mix of theoretically appropriate and feasible methods remains challenging, as each of the above approaches entails trade-offs of its own.
Researchers will perhaps need to delimit a manageable part of the full model (Figure 3) and balance different methods’ strengths and weaknesses with available resources.
Most importantly, the scarcity and shortcomings of empirical studies call, first of all, for further theoretical specification of the connection between dataveillance and inhibited digital communication behavior—an endeavor for which we hope to have laid the foundation. If researchers know exactly what they are looking for, they will be more likely to find it and to derive relevant conclusions, not least for governance options—or, if findings are inconsistent with the theory, to have greater confidence in the true absence of the effect.
Conclusion
People's sense of being subject to digital dataveillance can cause them to restrict their digital communication behavior. Such a chilling effect is a self-inhibition in everyday digital media use that risks undermining individual autonomy, well-being, and democratic participation. There is evidence of chilling effects, both from existing studies and from the new data provided here, yet robust validation requires further empirical research. The framework presented here provides an underlying causal model that unpacks the process between individuals’ sense of dataveillance and their digital communication behavior.
Several theoretical and empirical gaps remain; the following questions may present productive avenues for future research regarding the scope, process, prevalence, and governance of the chilling effects of dataveillance:
The outlined theory of the chilling effects of dataveillance does not suggest that entirely uninhibited digital communication is a silver bullet for democracy, or that the things people search for or post online should never have consequences. Rather, such consequences need to be reconciled with people's expectations of privacy and must be commensurate with the behavior in question. In the example in the introduction, a proportionate consequence of expressing one's—in this case presumably strongly opposing—views on the suspension of visa renewals would be that someone else replies with a strongly supporting opinion. If the first person then chooses not to express their views on such topics in the future because the confrontation was experienced as negative, this is not problematic; however, the automated collection of such digital traces and their compilation into data profiles for any future use, out of context and without the knowledge of the data subject, is.
In the greater context of the internet's impact on society, chilling effects are one process among countless others; analytically isolating this mechanism has been the goal of this article. Dataveillance can suppress political voices, but at the same time, democracy does not require the most open form of communication—particularly when online communication merely forestalls “real antagonism” (Dean, 2005). Beyond political behavior, however, the urgency of studying the chilling effects of dataveillance lies in digital communication's role in daily life: internet use is a valuable resource for fulfilling everyday needs such as information, interaction, transaction, and entertainment. Fear of undue negative consequences from useful and legitimate digital communication should not be added to the list of existing digital inequalities (Van Dijk, 2020). The decision to research a topic or talk to someone online, offline, or not at all should not be driven by the fear of being profiled when choosing the online option.
