Abstract
Introduction
The degree to which the social is intertwined with the Internet in the digital age is nicely encapsulated by the concept of “digital citizenship.” Mossberger et al. (2008) define digital citizenship as “the ability to participate in society online” and frame it as a prerequisite for social inclusion (p. 1). This perspective aligns with research on digital inequalities, which highlights the exclusionary implications of lacking access to digital media and failing to participate online (DiMaggio et al., 2001). Proponents of the digital citizenship concept argue that citizens need to be educated in “safe and responsible behavior online” (Jones and Mitchell, 2016: 2064), so that they refrain from risky practices, such as sexting, and avoid falling prey to abusive behavior, such as cyberbullying. However, for most users, participation in society online is associated with risks, such as online harassment, spam, hacking, or identity theft (Blank and Lutz, 2018; Dodel and Mesch, 2018).
Academic literature has long investigated such phenomena within the field of online privacy. On the one hand, threats to the integrity of users’ personal data are likely to affect their social, economic, and mental well-being. On the other hand, digital participation is impossible without sharing personal data (Ellison et al., 2007; Kane et al., 2014; Krasnova et al., 2010). Previous attempts to theorize information sharing, such as Communication Privacy Management theory, have argued that users are often aware that by sharing their information they extend its ownership to their audience (Petronio, 2002). However, with the expansion of social media and the ubiquity of mobile technology, users struggle to estimate the sizes and compositions of their online audiences (Litt, 2012). In addition, more recent and mobile-based social media platforms, like Instagram and Snapchat, include design principles aimed at promoting habitual use (Bayer et al., 2015), incentivizing users to share (Chen et al., 2017). As a result, and paradoxically, most Internet users report both high levels of privacy concerns and high levels of private information disclosure, while abstaining from rigorous privacy-protective behaviors (Young and Quan-Haase, 2013).
Recent reflections on “surveillance capitalism” (Zuboff, 2019) or “data capitalism” (West, 2019) highlight (a) the role of digital platforms as critical social infrastructures of modern society, imposing significant disadvantages on individuals who refrain from their usage, and (b) the extraction of personal data as a constitutive characteristic of digital platforms’ business models. Such perspectives underline that individuals can no longer meaningfully participate in society without paying with their personal data as a kind of entrance fee. The so-called privacy paradox (Kokolakis, 2017), therefore, can be understood as an indicator of this new social reality, as platform users are concerned by the commodification of their data, yet continue to share personal data to achieve social inclusion.
The privacy calculus has emerged as the most prominent model to explain the privacy paradox, particularly in regard to institutional privacy threats (Raynes-Goldie, 2010; Young and Quan-Haase, 2013). According to this approach, users weigh the expected benefits from online transactions against the perceived risks. When the benefits outweigh the risks, users will disclose personal information and the privacy paradox occurs (Dinev and Hart, 2006). However, the privacy calculus assumes that users act rationally and have full agency, which is often not realistic. More recent approaches in privacy research attempt to more explicitly account for the complexity of privacy-related decision-making by arguing that many individuals have resorted to privacy fatigue, privacy cynicism, surveillance realism, or privacy apathy (Hoffmann et al., 2016; Choi et al., 2018; Dencik and Cable, 2017; Hargittai and Marwick, 2016). As a result, instead of adapting their behavior, users have developed coping mechanisms to manage the tension between online participation as a “digital citizen” and the risks deriving from digital platforms’ access to personal data.
However, the mentioned studies on privacy fatigue, privacy cynicism, surveillance realism, and privacy apathy are, at this stage, mostly based on qualitative research and not strongly developed in conceptual terms. In this article, we provide more generalizable evidence on the phenomenon, discussing findings from a large-scale survey on online privacy in Germany. In particular, we test a nomological model of privacy cynicism. We show that powerlessness and mistrust are the most prevalent dimensions of privacy cynicism in Germany, while resignation is the least pronounced. Our findings contribute to the burgeoning online privacy literature (Baruh et al., 2017) and connect it to the growing debate around the surveillance economy and data capitalism. They also provide a quantitative approach to a theme of powerlessness experienced by Internet users when it comes to participating in the digital society. A key contribution is the differentiation of dimensions of privacy cynicism. As our analyses show, these dimensions have distinct effects on privacy protection and are associated with established constructs such as privacy concerns and Internet skills in different ways.
Literature review
The paradox of online privacy
Many fundamental privacy theories include, at their core, the notion of control. Westin’s definition of privacy was grounded in individuals’ self-determination of what to disclose and what to keep private (Westin, 2003; Westin and Ruebhausen, 1967). Altman (1977) claimed that individuals achieve their optimum level of privacy through a process “dependent on [their] ability to control interactions with others” (p. 67). Building on Altman’s idea of privacy as a non-monotonic process, modern approaches have conceptualized privacy as contextual (Nissenbaum, 2004) or situational (Masur, 2018). Especially within Masur’s (2018) situational approach, privacy is evaluated in each specific context, and control (through, for example, audience management) is put in place so that self-disclosure can take place. Taken in absolute terms, these approaches suggest that, as a response to a concerning privacy context, individuals would limit their self-disclosure. Yet several studies have found that, on the Internet, users reveal substantial amounts of sensitive personal data despite high privacy concerns. The concept of a “privacy paradox” (Barnes, 2006) describes discrepancies between privacy attitudes and behavior (Norberg et al., 2007). More recent research has found only a weak effect, if any, of privacy concerns on online self-disclosure and privacy protection (Dienlin and Trepte, 2015; Kokolakis, 2017).
However, the empirical evidence on the privacy paradox is mixed and many theoretical explanations have been attempted (Baruh et al., 2017; Kokolakis, 2017). According to Kokolakis (2017), more studies find evidence in favor of the paradox than against it. Yet in a meta-analysis of 166 studies, Baruh et al. (2017) report that, on aggregate, privacy concerns have a small negative effect on the use of online services and a small positive effect on privacy protection behavior. Social network sites (SNS), however, stand out as the exception where a paradox between concerns and behavior is found (Baruh et al., 2017). This highlights how SNS might have become critical social infrastructure, where participation is unavoidable, and the balance between user privacy and inclusion is challenging to manage. The exceptional status of SNS in the meta-analysis also points to the importance of context in the study of online privacy (Nissenbaum, 2004). In particular, it is important to distinguish between institutional privacy threats, as posed by institutions such as platform providers, and social privacy threats that emanate “horizontally” from other users (Raynes-Goldie, 2010; Young and Quan-Haase, 2013). Previous research has shown that users react more strongly to social than institutional privacy threats (boyd and Hargittai, 2010), indicating a higher level of either indifference or helplessness when facing threats emanating “vertically” from institutions.
Scholars have developed different explanations for the absence of an effect of privacy concerns on user behavior. Among these, the
Another explanation for the privacy paradox focuses on a
In this study, we will advance our understanding of privacy by drawing on a newly developed concept that complements the available explanations for the privacy paradox: privacy cynicism. The concept of privacy cynicism describes users’ attitude toward data protection and privacy, within a context of limited subjective agency (Hoffmann et al., 2016). Users report a feeling of powerlessness when faced with a complex and opaque online landscape, where platforms, institutions, and other users have unprecedented access to their data (Hoffmann et al., 2016). This feeling of digital resignation (Draper and Turow, 2019), particularly when faced with institutional privacy threats, has also been addressed under the terms privacy apathy (Hargittai and Marwick, 2016), privacy fatigue (Choi et al., 2018), and surveillance realism (Dencik and Cable, 2017). In the next two sections, we will discuss current perspectives on surveillance capitalism (Zuboff, 2019) and data capitalism (West, 2019) and then, informed by these critical accounts, review conceptualizations of user powerlessness in the form of surveillance realism, privacy apathy, and privacy fatigue. Finally, we will present privacy cynicism as our take on the topic and derive the hypotheses for our research model.
Data capitalism: challenges to user agency
The first definition of the Internet as a Panopticon was formulated by Campbell and Carlson as early as 2002. Drawing upon Foucault, the authors observe how consumerism leads users to share private information “in the belief that [they] will ultimately benefit from such disclosure through convenient access to goods and or services” (Campbell and Carlson, 2002: 592). In the last decade, data collection by online platforms has increased in comprehensiveness as algorithms rely on behavioral data to adapt and expand online services. This has generated a system of data capitalism, where the surveillance of users generates value (West, 2019). According to Zuboff, these personal data are both made necessary for users to access services, and endlessly monetizable for the platforms collecting, analyzing, and selling them to third parties (Zuboff, 2019).
A common thread in the surveillance capitalism or data capitalism critique of digital platforms is their challenge to user agency. Referencing Mark Zuckerberg’s description of SNS as “social infrastructure”, West (2019) highlights that “[u]sers are placed in a double bind, caught between desires for privacy and the ability to form meaningful communities with other users online without opting out of these services.” (p. 37)
Hence, the challenge to user agency in relation to data protection becomes vastly more complicated in the context of social media platforms. Several studies have highlighted how, without a sufficient level of self-disclosure, users will find it hard to establish social connections (boyd, 2007; Kane et al., 2014). Social media platforms eagerly frame self-disclosure as a contribution to the community, repaid by the self-disclosure of other members (Ellison et al., 2007). Recent critics, however, point out that, thanks to the harvesting of behavioral data, platforms might have eroded users’ choice not to share personal data (West, 2019; Zuboff, 2019). This may be reflected in a disconnect between individuals’ abstract concept of what privacy should be, and what they obtain while participating on the platforms (Sujon, 2018).
New perspectives: surveillance realism, privacy apathy, and privacy fatigue
Dencik and Cable’s (2017) concept of
Hargittai and Marwick (2016) introduce the concept of
Choi et al. (2018) develop the concept of
While surveillance realism, privacy apathy, and privacy fatigue present important steps toward understanding how citizens try to resolve the tension between social inclusion and privacy online, they come with at least two shortcomings. First, the studies are based on small and specialized samples, making generalizations problematic. Second, the concepts largely neglect previous research and theories in other fields that could help structure the phenomenon (as an exception, see Draper and Turow, 2019). To address these limitations, we propose privacy cynicism as a related and suitable concept. It is motivated by social psychology and tested through more generalizable data. In the following, we discuss the concept of privacy cynicism and embed it into a nomological model with related constructs such as privacy concerns, Internet skills, and privacy threat experience.
Privacy cynicism
The concept of privacy cynicism was developed to advance our understanding of the privacy paradox, particularly in the context of institutional privacy threats (Hoffmann et al., 2016). It represents a cognitive coping mechanism, allowing users to overcome or ignore privacy concerns and engage in online transactions, without ramping up privacy protection efforts.
Cynicism has been explored primarily in dyadic relationships. It implies assumptions about the motives of an interaction partner (Mills and Keil, 2005), who is presumed to be driven by self-interest, and eager to take advantage of the person concerned. As trust is based on assumptions of competence, benevolence, and integrity (Bhattacherjee, 2002), cynicism implies a level of mistrust and antagonism (Almada et al., 1991).
Another important element associated with cynicism is a feeling of powerlessness. In dyadic relationships, when one of the two agents is left with little or no control over decision-making, they will be more likely to grow cynical about the other’s motives and actions (Dean et al., 1998). Previous research has linked cynicism to outcomes such as lack of institutional trust and engagement (Langworthy, 1987), and repercussions for well-being, such as burnout (Salanova et al., 2005).
While cynicism has been defined as either an attitude or a belief (Andersson, 1996; Dean et al., 1998), it also functions as a coping mechanism. Individuals resort to cynicism when they are unable to control the circumstances behind their decision-making. In this context, risks are not discounted, but rather perceived as inevitable, because they lie entirely outside the individual’s control (Kanter and Mirvis, 1989). This perception supports inaction in the face of potentially harmful circumstances and presents an interesting lens through which online self-disclosure and privacy threats can be approached. In a scenario such as data or surveillance capitalism, where users think they have limited control over their information, privacy protection might seem useless.
As such, we understand privacy cynicism as an attitude of uncertainty, powerlessness, and mistrust toward the handling of personal data by digital platforms, rendering privacy protection subjectively futile (Hoffmann et al., 2016). We argue that a system of data capitalism, based on the harvesting of private data, coupled with devices designed to maximize user interaction, might have made situational evaluations of available privacy (Masur, 2018) too complicated for users to take into account when choosing their desired level of disclosure (Hoffmann et al., 2016). In this context of ubiquitous institutional privacy threats, privacy cynicism can be understood as a cognitive coping mechanism: it allows subjectively disempowered users to participate in online platforms without cognitive dissonance, as they rationalize privacy protection as useless.
We expect privacy cynicism attitudes to be negatively related to Internet skills and privacy protection (Hoffmann et al., 2016; Park, 2013), as high-skilled users should feel less powerless vis-à-vis service providers. Previous research on privacy and Internet skills has found a positive and strong effect of Internet skills on privacy protection (Bartsch and Dienlin, 2016; Büchi et al., 2017; Masur, 2018). We argue that more skilled Internet users have higher agency when it comes to data protection and privacy online, which should result in lower levels of disempowerment and thus cynicism. Therefore, our first hypothesis introduces an association between Internet skills and privacy cynicism:
Compared with Internet skills, privacy threat experience is more closely tied to specific security risks that occur online such as spam, hacking, and phishing. We argue that privacy threat experience and privacy cynicism are generally positively associated. Threat experience is likely to bolster mistrust in service providers. Particularly under conditions of uncertainty, threat experiences may also induce feelings of powerlessness and, ultimately, resignation. However, if users experience little uncertainty or high levels of efficacy, threat experiences could also lead to more protection behavior rather than resignation. The relationship between threat experience and cynicism is therefore likely to be moderated. For the sake of clarity, our model will not delve into such moderation effects, and given low privacy literacy among large segments of Internet users (Bartsch and Dienlin, 2016), we propose a net positive effect:
Privacy concerns are a key construct in privacy research. Their interrelation with privacy protection behavior and online self-disclosure is at the heart of the privacy paradox literature (Baruh et al., 2017; Kokolakis, 2017). Privacy concerns go beyond awareness and experience by capturing worries. We argue that such concerns are connected to cynicism, especially if user agency is perceived to be diminished (powerlessness). Higher levels of privacy concerns are associated with higher levels of attention to privacy issues, specifically privacy threats. As a result, feelings of mistrust and uncertainty should be related to privacy concerns:
The first three hypotheses focus on privacy cynicism as a dependent construct; however, within our nomological model, privacy cynicism also serves as an independent construct: We argue that privacy cynicism and privacy protection behavior are associated. Cynicism, especially in a strong sense of powerlessness and resignation, renders privacy protection behavior subjectively futile (Hoffmann et al., 2016). Choi et al. (2018) tested this effect for privacy fatigue and found a positive and significant effect on disclosure intention. Since privacy protection behavior is an attempt to limit disclosure and keep certain information inaccessible, which requires cognitive effort, we propose that privacy cynicism will inhibit such activity:
The relationship between privacy concerns and privacy protection behavior lies at the heart of the privacy paradox discussion. Based on the latest empirical findings, we hypothesize a positive relationship (Baruh et al., 2017; Kokolakis, 2017):
Privacy threat experience and privacy concerns are related but conceptually distinct constructs. Given their similarity, we suggest that there should be a positive relationship between the two. More specifically, users with higher levels of privacy threat experience will likely be more concerned, as privacy risks are more visible and tangible to them:
Internet skills could help assess online privacy in a more nuanced and fact-based way. By doing so, Internet skills could alleviate unfounded concerns. Highly skilled Internet users can be expected to perceive higher levels of efficacy and control, rendering them less vulnerable to privacy threats. We therefore propose a negative relationship between Internet skills and privacy concerns:
Finally, we postulate a negative association between Internet skills and privacy threat experience. Internet skills imply the ability to proactively use the Internet in different ways. Such skills, while largely implicit, also entail specific knowledge on the more explicit side, for example, in terms of security and privacy, which can protect against privacy threat experience:
Figure 1 shows the overall model. In the following, we first explore the dimensionality of privacy cynicism before testing the nomological model developed.

Research model.
Methods
Data
We used data collected through an online survey in Germany between November and early December 2017. Access to the sample was provided by a certified market research institute. A total of 1008 respondents completed the survey. Gender and age quotas were applied to ensure a sample composition roughly equivalent to the overall German population; 516 (51%) respondents reported being female and 492 (49%) being male. The average age in the sample was 50 years with a standard deviation (
Measures and method
To measure privacy cynicism, we developed an original scale based on previous conceptual work and qualitative studies as well as a number of small-scale quantitative student surveys. Ultimately, 27 items relating to privacy cynicism were included, all measured on 1–5 Likert-type scales (1 = strongly disagree; 5 = strongly agree). Prior to the survey, we conducted two pilot student surveys in German (
We measured privacy concerns with three items from Malhotra et al. (2004). Privacy protection behavior was measured with five items from Milne et al. (2004) as well as Youn (2009). Privacy threat experience was measured with three items from a set of items developed by the authors based on commercial polls on privacy issues in Germany. Internet skills were measured with seven (out of 30) items from Hargittai’s (2009) scale, which queries respondents on their knowledge of Internet and computer terms. Respondents had to indicate their level of understanding of these terms using a 5-point scale that ranged from 1 = no understanding to 5 = full understanding. We included one bogus item (proxypod) as a control; it was not used in the SEM. All six remaining items loaded neatly on one factor and revealed high internal consistency (Cronbach’s α = .90). Due to the relatively high correlation between pdf and advanced search, and their lower loadings compared with the other four items, we dropped these two items from the SEM, leaving us with a four-item measurement for Internet skills (spyware, wiki, phishing, cache).
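For readers less familiar with the reliability statistic reported above, Cronbach’s α relates the sum of the item variances to the variance of the summed scale score. The following is a minimal illustrative sketch with simulated Likert-type data, not the study’s actual data or analysis pipeline:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 1-5 responses for four correlated items, loosely mimicking
# the four-item Internet skills measure (illustrative values only).
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.clip(
    np.round(3 + latent[:, None] + rng.normal(scale=0.7, size=(500, 4))),
    1, 5)
print(round(cronbach_alpha(items), 2))
```

With strongly intercorrelated items such as these, α lands well above the conventional .70 threshold, mirroring the high internal consistency reported for the skills scale.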
All constructs had good internal consistency and convergent validity, except for privacy protection behavior and privacy threat experience, which proved to be problematic in terms of the factor loadings. Nevertheless, we retained the constructs due to their importance in the overall model and the lack of a more established alternative. Based on the Fornell–Larcker test, discriminant validity can be assumed (Fornell and Larcker, 1981). The wording of the constructs used is displayed in Supplemental Appendix A, the measurement in Supplemental Appendix B, and the discriminant validity test in Supplemental Appendix C. Due to the high correlation between the error terms of the second and third privacy protection items (potentially caused by similar question wording starting with “Asking a website . . .” and resulting in a modification index of 243.131), we allowed the covariance between these error terms to be freely estimated.
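The Fornell–Larcker criterion checks, for each construct, that the square root of its average variance extracted (AVE, the mean of the squared standardized loadings) exceeds its correlations with all other constructs. A minimal sketch with hypothetical loadings and a hypothetical latent correlation (the study’s actual values are in Supplemental Appendix C):

```python
import numpy as np

def fornell_larcker(loadings: dict, construct_corr: np.ndarray, names: list) -> bool:
    """True if sqrt(AVE) of every construct exceeds all its inter-construct correlations."""
    ave = {c: np.mean(np.square(l)) for c, l in loadings.items()}
    for i, ci in enumerate(names):
        for j in range(len(names)):
            if i != j and np.sqrt(ave[ci]) <= abs(construct_corr[i, j]):
                return False
    return True

# Hypothetical standardized loadings for two constructs (illustrative only)
loadings = {
    "mistrust": np.array([0.78, 0.81, 0.75]),
    "resignation": np.array([0.70, 0.72, 0.68]),
}
corr = np.array([[1.0, 0.45],
                 [0.45, 1.0]])
print(fornell_larcker(loadings, corr, ["mistrust", "resignation"]))  # True
```

Here sqrt(AVE) is about .78 for the first construct and .70 for the second, both above the .45 correlation, so discriminant validity would be assumed.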
Results
Descriptive statistics and dimensionality
The four factors comprising the dimensions of privacy cynicism are: mistrust (seven items), uncertainty (six items), powerlessness (five items), and resignation (five items) (Table 1). All components displayed high internal consistency and the overall solution had very good sampling adequacy, as seen in the high Kaiser–Meyer–Olkin (KMO) value of .90 (Kaiser and Rice, 1974). Mistrust had an arithmetic mean across all items of 3.50 (
Exploratory factor analysis.
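The KMO measure cited above compares the sum of squared bivariate correlations against that sum plus the sum of squared partial correlations; values close to 1 indicate that items share enough common variance for factor analysis. A minimal sketch with simulated data (not the study’s data), computing partial correlations from the inverse of the correlation matrix:

```python
import numpy as np

def kmo(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0)
    np.fill_diagonal(partial, 0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

# Six simulated items driven by one common factor (illustrative only)
rng = np.random.default_rng(1)
factor = rng.normal(size=(400, 1))
data = factor @ np.ones((1, 6)) + rng.normal(scale=0.8, size=(400, 6))
print(round(kmo(data), 2))
```

For items dominated by a common factor, as in this simulation, the statistic comes out high, consistent with the “very good” adequacy label for the reported .90.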
SEM
The SEM had good model fit, Chi-Square = 881.075 (
Structural equation model for privacy cynicism dimensions.
DV: dependent variable.
Column 1: not bold: independent variables, Column 2: standardized path coefficients (standard errors), Column 3:
Table 3 shows the result for hypotheses related to privacy protection behavior (H4 and H5). Our findings reveal a mixed picture and weak support for H4. Only one of the cynicism dimensions has the expected effect, namely resignation. Individuals who are more resigned protect their privacy less. Mistrust is significant but positive. Thus, users who have high levels of mistrust in Internet companies protect themselves more. The two remaining dimensions of uncertainty and powerlessness are insignificant. H5 is supported, as privacy concerns have a positive and pronounced effect on privacy protection behavior. Thus, we find no evidence for the privacy paradox in our model.
Structural equation model for privacy protection behavior.
DV: dependent variable.
Column 1: not bold: independent variables, Column 2: standardized path coefficients (standard errors), Column 3:
Table 4 shows the results for the remaining hypotheses. We find support for H6 as privacy threat experience has the expected positive effect on privacy concerns. Internet skills and privacy concerns, by contrast, are not significantly associated (H7). Finally, Internet skills have a positive and significant effect on privacy threat experience, rather than the predicted negative effect. Thus, H8 is rejected.
Structural equation model for privacy concerns and privacy risk awareness.
DV: dependent variable.
Column 1: not bold: independent variables, Column 2: standardized path coefficients (standard errors), Column 3:
Figure 2 provides an overview of the hypotheses. Five of eight hypotheses are supported or mostly supported, while three hypotheses are rejected or partially rejected. Most variance could be explained for privacy protection behavior (Table 3). Among the privacy cynicism dimensions, uncertainty can be best explained with the three predictor constructs (Table 2), with an explained variance of 27%, followed by mistrust (16%), powerlessness (12%), and resignation (9%).

Summary of SEM results.
Discussion and conclusion
As digital platforms, and social media in particular, emerge as “social infrastructures” of the digital age, research focuses on the evolving preconditions for participation and inclusion (Blank and Lutz, 2018; Micheli, 2016; Van Deursen et al., 2017). Users’ self-disclosure is one such precondition (Kane et al., 2014), and social media platforms are eager to frame the act of sharing as a contribution to the member community (Ellison et al., 2007; West, 2019). This is facilitated by the fact that, especially online, intimacy in relationships requires more extensive self-disclosure (boyd, 2006, 2007; Vitak, 2012). In addition, as the concept of digital citizenship (Mossberger et al., 2008) indicates, an argument can be made for the importance of digital platforms in the promotion of participation in society and social inclusion. However, the evolution of digital platforms into critical infrastructures comes at a high cost for user agency. Individuals face severe disbenefits if they refrain from the use of digital platforms (West, 2019; Zuboff, 2019). With social media platforms, such disbenefits combine the loss of personalized offers with the depletion of an important source of connection, potentially resulting in social isolation, exclusion, or ostracism.
As highlighted by analyses of data capitalism (West, 2019) and surveillance capitalism (Zuboff, 2019), convenience as well as fundamental relational needs might lead individuals to feel “trapped” in the role of platform users. This can lead to disempowerment, hopelessness, and resignation when it comes to data protection. This development sheds new light on research addressing the so-called privacy paradox, especially in the context of institutional privacy concerns, by questioning assumptions of user rationality and agency. Recent studies have introduced concepts such as surveillance realism (Dencik and Cable, 2017), privacy apathy (Hargittai and Marwick, 2016), privacy fatigue (Choi et al., 2018), digital resignation (Draper and Turow, 2019), and privacy cynicism (Hoffmann et al., 2016). The latter concept was first derived from a review of cynicism literature and substantiated based on qualitative focus group data (Hoffmann et al., 2016). This article presents a preliminary measure of privacy cynicism, taking the multidimensionality of the concept into account. Our empirical analysis, based on a large-scale survey of Internet users in Germany, is the first to contribute quantitative empirical evidence to an emergent stream of online privacy research.
Our findings highlight the multidimensionality of privacy cynicism:
This also speaks to the understanding of privacy as contextual (Nissenbaum, 2004) and situational (Masur, 2018) as users may not perceive the same level of resignation in all settings and situations. As users are exposed to both vertical (i.e. institution-based) and horizontal (i.e. peer-generated) threats, they might perceive the latter as more worrying and worthy of their active attention (Sujon, 2018; Young and Quan-Haase, 2013). Accordingly, resignation may be less of a trait and more of a state. It should be noted, however, that in a digital domain, vertical and horizontal pressures interact: mistrust toward a platform may impede social interactions, and conversely, social concerns may lead to adjustments in platform use.
Powerlessness emerges as the most prevalent dimension of privacy cynicism. For our respondents, lacking control over the sharing of personal data online appears as the most salient dimension of privacy cynicism. This finding is especially relevant to privacy research, which tends to assume user agency as informational self-determination (Westin and Ruebhausen, 1967; to a lesser degree Altman, 1977), an assumption that may have to be reconsidered in the context of surveillance capitalism. In contexts or situations characterized by powerlessness, users may resist, withdraw, or, in some instances, resign, rather than regulate their level of self-disclosure. Beyond privacy research, the centrality of powerlessness to the phenomenon of cynicism should be of note. Digital platforms offer affordances for participation and inclusion, implying a potential for empowerment. However, in a context of surveillance and data capitalism, this potential may well be trumped by mistrust and powerlessness in relation to those platforms ostensibly providing the infrastructure for participation and inclusion. This again highlights how institutional pressures affect social dynamics. Policies and design interventions aiming to reduce or avoid cynicism should be guided by the goal of giving agency back to users (Pybus et al., 2015). Kennedy and Moss (2015) discuss user agency in the context of social media data mining and offer three avenues for more empowered users: greater public supervision and regulation of data mining; making the tools for data mining, including software and literacy, more broadly available to the public; and initiatives that engage the public and stimulate reflections on their data practices (e.g. through good data journalism such as the Guardian’s Reading the Riots project). A multi-stakeholder approach, combining several or all of these avenues, could be fruitful in tackling powerlessness.
Mistrust also appears as an important component of privacy cynicism, especially as it pertains to institutional privacy. Previous research has highlighted that trust plays a relevant role as an antecedent to users’ information sharing (Krasnova et al., 2010), and that users are relatively untrusting toward Internet services such as SNS (Lutz and Strathoff, 2014; Kim et al., 2012). This study extends such findings by highlighting how widespread mistrust plays into users’ feelings of privacy cynicism. Our results suggest that higher privacy concerns generate more mistrust, independent of skills and threat experience. Generally, mistrust can predict privacy protection behavior, which speaks for conceptualizing cynicism multidimensionally, as mistrust and resignation co-occur among cynical users when situations are characterized by uncertainty and powerlessness. Conversely, in a situation where users feel they lack control over their data, having trust in the organizations handling their data may partially compensate for their powerlessness.
Our study also aims to establish a quantitative approach to privacy cynicism and the larger emergent research stream focusing on privacy apathy, fatigue, or resignation in the context of “data capitalism” (Dencik and Cable, 2017; Hargittai and Marwick, 2016). The quantitative analysis allows for a differentiated understanding of distinct dimensions of cynicism, their respective prevalence, and effects. As an initial quantitative study, this analysis calls for further conceptual work to refine the relationships between salient constructs in the literature. As a first quantitative attempt to test privacy cynicism, this article has several limitations. First, while privacy literacy was at the core of our model, we did not control for the intensity of platform use of our respondents, which could influence their experience of privacy cynicism. We also have no indication of the respondents’ knowledge of the data leaks that took place in recent years, and whether this affected their experience of cynicism. Recent developments in privacy research describe privacy as contextual (Nissenbaum, 2004) and situational (Masur, 2018). This study cannot fully delve into the role of privacy cynicism in distinct contexts and situations as it is based on a cross-sectional study examining privacy cynicism more broadly. Finally, while our non-probability sample approximates the structure of the German population, it is not representative. As such, the national generalizability of our results is limited. Future studies could (a) differentiate distinct platforms as critical social infrastructures, (b) distinguish social and institutional platform benefits as well as privacy threats, (c) delve deeper into antecedents of user agency, (d) examine the situational role of cynicism as a state, and (e) provide a more comprehensive overview of cynicism outcomes in terms of inclusion and well-being.
Among the dimensions of cynicism, powerlessness, in particular, could be taken up in future research and investigated in relation to forced sociality and commodification of social capital in the vein of data capitalism or surveillance capitalism (West, 2019; Zuboff, 2019). As such, this article constitutes a further step in a young research stream, hoping to inspire future research.
Supplemental Material
Supplemental material, appendices for Data capitalism and the user: An exploration of privacy cynicism in Germany by Christoph Lutz, Christian Pieter Hoffmann and Giulia Ranzini in New Media & Society
Footnotes
Funding
Author biographies
References
